1. Collect data, you won't know what it means
2. Collect data, *kind* of understand it
3. Collect data, understand it. Able to say: "This is what's happening, let's try changing *that*"
4. Operational control. "If we do *this*, *that* will happen"
What you measure is mostly irrelevant. What matters is that you measure, understand what it means, and know what you can do to move your process dials up or down.
If you ask questions about your #SOC constantly (ex: how much analyst time do we spend on suspicious logins and how can we reduce that?) - progress is inevitable.
W/o constantly asking questions and answering them using data, scaling/progress is coincidental.
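To make "answering them using data" concrete, here's a minimal sketch of answering the suspicious-login question above. The triage_records.csv file and its alert_type / analyst_minutes / week columns are made-up placeholders; map them to whatever your SOC platform actually exports.

```python
import csv
from collections import defaultdict

# Sum analyst minutes spent on suspicious-login alerts, per week.
# File name and column names are assumptions -- adapt to your own export.
minutes_by_week = defaultdict(float)

with open("triage_records.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["alert_type"] == "suspicious_login":
            minutes_by_week[row["week"]] += float(row["analyst_minutes"])

for week, minutes in sorted(minutes_by_week.items()):
    print(f"{week}: {minutes / 60:.1f} analyst hours on suspicious logins")
```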
You can apply this methodology to other areas of a business.
Take sales. If we start w/ the premise that “booked meetings” eventually result in new business, how many /week? What causes that # to go up or down? If you know what levers you have to make your process move, $.
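A quick worked example of that funnel math (every number below is an invented assumption; the point is identifying the levers):

```python
# Back-of-the-envelope funnel math; all numbers are made-up assumptions.
meetings_per_week = 20
meeting_to_deal_rate = 0.10   # 1 in 10 booked meetings becomes a deal
avg_deal_value = 25_000       # dollars

weekly_new_business = meetings_per_week * meeting_to_deal_rate * avg_deal_value
print(f"Expected new business per week: ${weekly_new_business:,.0f}")

# The levers: raise meetings_per_week, improve meeting_to_deal_rate,
# or grow avg_deal_value -- then measure which one actually moves the number.
```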
Ask yourself these questions:
1. What can I know?
2. How can I know it?
3. What does it mean?
4. What makes the data change? (Events, process, automation, etc.)
5. Where do I want to get to?
6. What does great look like?
7. What's stopping me?
Gathering my thoughts for a panel discussion tomorrow on scaling #SOC operations in a world with increasing data as part of the SANS #BlueTeamSummit.
No idea where the chat will take us, but luck favors the prepared. A 🧵 of random thoughts likely helpful for a few.
Before you scale anything, start with strategy. What does great look like? Are you already there and now you want to scale? Or do you have some work to do?
Before we scaled anything at @expel_io, we defined what great #MDR service looked like, and delivered it.
We started with the customer and worked our way back. What does a 10 ⭐ MDR experience look like?
We asked a lot of questions. When an incident happens, when do we notify? How do we notify? What can we tell a customer now vs. what details can we provide later?
Quick 🧵of some of the insights and actions we're sharing with our customers based on Q2 '21 incident data.
TL;DR:
- #BEC in O365 is a huge problem. MFA everywhere, disable legacy protocols.
- We’re 👀 more ransomware attacks. Reduce/control the self-install attack surface.
Insight: #BEC attempts in O365 were the top threat in Q2, accounting for nearly 50% of the incidents we identified.
Actions:
- MFA everywhere you can
- Disable legacy protocols
- Implement conditional access policies
- Consider Azure Identity Protection or MCAS
re: Azure Identity Protection & MCAS: They build data models for each user, making it easier to spot atypical auth events. Also, better logging. There's $ to consider here, I get it. Merely providing practitioner's perspective. They're worth a look if you're struggling with BEC.
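If you're not sure whether legacy protocols are still in use before you disable them, a pass over an exported Azure AD sign-in log can tell you. A minimal sketch: the "Client app" / "User" column names and the legacy-app values are assumptions, so check them against your own export.

```python
import csv
from collections import Counter

# Count legacy-protocol sign-ins per user from an Azure AD sign-in log export.
# Column names and the legacy values below are assumptions -- verify them.
LEGACY_APPS = {"IMAP4", "POP3", "Authenticated SMTP", "Exchange ActiveSync", "Other clients"}

legacy_signins = Counter()
with open("signin_logs.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        if row.get("Client app") in LEGACY_APPS:
            legacy_signins[row.get("User", "unknown")] += 1

for user, count in legacy_signins.most_common(20):
    print(f"{user}: {count} legacy-protocol sign-ins")
```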
We see a lot of variance at the end of Feb that continues into the beginning of Mar. This was due to a number of runaway alerts and some signatures that needed tweaking.
What’s most interesting is that the variance decreases after we released the suppression features on Mar 17.
We believe this is due to analysts having more granular control of the system, and it’s now easier than ever to get a poor-performing Expel alert back under control.
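You can run the same before/after variability check on your own alert volumes. A rough sketch, assuming an alerts.csv export with one row per alert and a created_at timestamp (both placeholders):

```python
import csv
import statistics
from collections import Counter
from datetime import datetime

# Count alerts per day, then compare day-to-day variability before and after
# a change date. The alerts.csv export and created_at column are assumptions.
daily = Counter()
with open("alerts.csv", newline="") as f:
    for row in csv.DictReader(f):
        daily[datetime.fromisoformat(row["created_at"]).date()] += 1

cutoff = datetime(2021, 3, 17).date()  # whatever change you want to measure
before = [n for d, n in daily.items() if d < cutoff]
after = [n for d, n in daily.items() if d >= cutoff]

print("stdev before:", statistics.stdev(before))
print("stdev after:", statistics.stdev(after))
```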
Process tree below so folks can query / write detections
Also, update!
Detection moments:
- w3wp.exe spawning CMD shell
- PS download cradle to execute code from Internet
- CMD shell run as SYSTEM to run batch script from Public folder
- Many more
Bottom line: a lot of ways to spot this activity.
Build. Test. Learn. Iterate.
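For folks who want to turn the first detection moment above into a quick hunt, here's a minimal sketch. It assumes an NDJSON export of process-creation events with parent_image / image / command_line / hostname fields; those names are placeholders for whatever your EDR or Sysmon pipeline emits.

```python
import json

# Flag IIS worker processes spawning a command shell or PowerShell.
# Field names are assumptions -- map them to your EDR/Sysmon schema.
with open("process_events.ndjson") as f:
    for line in f:
        event = json.loads(line)
        parent = event.get("parent_image", "").lower()
        child = event.get("image", "").lower()
        if parent.endswith("\\w3wp.exe") and child.endswith(("\\cmd.exe", "\\powershell.exe")):
            print(event.get("hostname"), parent, "->", child, event.get("command_line"))
```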
Also, update. :)
And some additional details from @heyjokim after further investigating:
Attack vector/Initial Compromise: CVE-2021-27065 exploited on Exchange Server
Foothold: CHOPPER webshells
Payload: DLL Search Order Hijacking (opera_browser.exe, opera_browser.dll, opera_browser.png, code)
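A quick single-host sweep for the hijack artifacts named above. Your EDR's file search is the better tool at fleet scale; this is just a sketch, and the scan root is an assumption.

```python
import os

# Sweep one host for the DLL search-order-hijack artifacts named above.
# The scan root is an assumption; point it wherever makes sense for you.
ARTIFACTS = {"opera_browser.exe", "opera_browser.dll", "opera_browser.png"}

for root, _dirs, files in os.walk("C:\\"):
    for name in files:
        if name.lower() in ARTIFACTS:
            print(os.path.join(root, name))
```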
1. Create an inbox rule to fwd emails to the RSS Subscriptions folder
2. Query your SIEM
3. How often does this happen?
4. Can you build an alert or cadence around inbox rule activity?
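A sketch of step 2, sweeping a Unified Audit Log export for new or changed inbox rules that move or forward mail. The Operations / UserIds / AuditData column names and the AuditData structure match a typical compliance-center CSV export, but verify against yours.

```python
import csv
import json

# Pull new/changed inbox rules that move or forward mail out of a
# Unified Audit Log CSV export. Column names here are assumptions.
RULE_OPS = {"New-InboxRule", "Set-InboxRule"}

with open("unified_audit_log.csv", newline="", encoding="utf-8-sig") as f:
    for row in csv.DictReader(f):
        if row.get("Operations") not in RULE_OPS:
            continue
        audit = json.loads(row["AuditData"])
        params = {p["Name"]: p["Value"] for p in audit.get("Parameters", [])}
        if any(k in params for k in ("MoveToFolder", "ForwardTo", "RedirectTo")):
            print(row.get("UserIds"), row.get("Operations"), params)
```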
- Pro-active search for active / historical threats
- Pro-active search for insights
- Insights lead to better understanding of org
- Insights are springboard to action
- Actions improve security / risk / reduce attack surface
With these guiding principles in hand, here's a thread of hunting ideas that will lead to insights about your environment - and those insights should be a springboard to action.
Here are my DCs:
Do you see evidence of active / historical credential theft?
Can you tell me the last time we reset the krbtgt account?
Recommendations to harden my org against credential theft?
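For the krbtgt question, one way to answer it is straight from LDAP. A sketch using the ldap3 Python library; the server, credentials, and base DN are placeholders, not real values.

```python
from datetime import datetime, timedelta, timezone
from ldap3 import ALL, NTLM, Connection, Server

# When was the krbtgt password last set? Server, credentials, and base DN
# below are placeholders.
server = Server("dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\hunter", password="REDACTED",
                  authentication=NTLM, auto_bind=True)

conn.search("DC=example,DC=com", "(sAMAccountName=krbtgt)", attributes=["pwdLastSet"])
last_set = conn.entries[0].pwdLastSet.value
if isinstance(last_set, int):
    # Raw Windows FILETIME (100ns ticks since 1601) if ldap3 didn't format it.
    last_set = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=last_set / 10)

print("krbtgt password last set:", last_set)
if datetime.now(timezone.utc) - last_set > timedelta(days=180):
    print("Probably time for a (careful, two-step) krbtgt reset.")
```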