1. Create an inbox rule to forward emails to the RSS Subscriptions folder
2. Query your SIEM (sketch below)
3. How often does this happen?
4. Can you build an alert or review cadence around inbox rule activity?
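If your SIEM ingests the Microsoft 365 unified audit log, step 2 can be a simple filter on inbox-rule operations. A rough sketch in Python against an exported log; the file name and field names ("Operation", "UserId", "Parameters", "CreationTime") are assumptions based on the Exchange admin audit schema, so map them to your own SIEM's fields:

```python
# Rough sketch of step 2 against an exported Microsoft 365 unified audit log (JSON lines).
# Verify the field names against your SIEM's mapping before relying on this.
import json

RULE_OPS = {"New-InboxRule", "Set-InboxRule"}

with open("unified_audit_log.jsonl") as f:          # hypothetical export file
    for line in f:
        record = json.loads(line)
        if record.get("Operation") not in RULE_OPS:
            continue
        params = {p["Name"]: p["Value"] for p in record.get("Parameters", [])}
        folder = params.get("MoveToFolder", "")
        if "RSS" in folder or params.get("ForwardTo") or params.get("RedirectTo"):
            print(record.get("CreationTime"), record.get("UserId"), params)
```

Once you know the baseline rate, step 4 is just deciding whether this runs as an alert or a daily/weekly review.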
- Proactive search for active / historical threats
- Proactive search for insights
- Insights lead to a better understanding of the org
- Insights are a springboard to action
- Actions improve security, reduce risk, and shrink the attack surface
With these guiding principles in hand, here's a thread of hunting ideas that will lead to insights about your environment - and those insights should be a springboard to action.
Here are my DCs:
- Do you see evidence of active / historical credential theft?
- Can you tell me the last time we reset the krbtgt account? (sketch below)
- What are your recommendations to harden my org against credential theft?
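The krbtgt question can be answered with one LDAP query. A minimal sketch using the ldap3 library; the DC hostname, base DN, and service account are placeholders:

```python
# Minimal sketch: read pwdLastSet for krbtgt over LDAP (placeholders throughout).
from datetime import datetime, timedelta, timezone
from ldap3 import ALL, NTLM, Connection, Server

server = Server("dc01.corp.example.com", get_info=ALL)
conn = Connection(server, user="CORP\\hunter", password="********",
                  authentication=NTLM, auto_bind=True)

conn.search("DC=corp,DC=example,DC=com", "(sAMAccountName=krbtgt)",
            attributes=["pwdLastSet"])

# pwdLastSet is a Windows FILETIME: 100-nanosecond intervals since 1601-01-01 UTC.
filetime = int(conn.entries[0]["pwdLastSet"].raw_values[0])
reset_at = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime / 10)
age_days = (datetime.now(timezone.utc) - reset_at).days
print(f"krbtgt last reset: {reset_at:%Y-%m-%d} ({age_days} days ago)")
```

If that age is measured in years, you've found exactly the kind of insight that should become an action.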
An effective interview includes 🕘 for the applicant to ask questions.
A few to consider if you're interviewing:
1. What are the big problems you're solving this year?
2. One year from now, this person has been successful. What did they do?
3. Conversely, six months from now it didn't work out. What happened?
4. How do you measure performance? What's the cadence?
5. What's the typical tenure for this role?
6. Is the team growing, or is this hire a backfill? If a backfill: can you talk about that employee's new role?
7. Will we have weekly 1:1s? If so, what's a typical agenda?
8. How many direct reports do you have?
9. What does a typical day look like?
If you're unclear on the traits and skills the hiring manager is seeking, ask!
"What are the traits and skills you're seeking for this role?"
1. ISO 2859-1 (#AQL) to determine sample size
2. #Python #Jupyter notebook to perform the random selection (sketch below)
3. Check sheet to spot defects
4. Process runs every 24 hrs
5. (Digestible) #Metrics to improve
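The random-selection step is only a few lines in the notebook. A minimal sketch, assuming the previous day's closed work is exported to a CSV; the file and column names are hypothetical, and the sample size comes from the ISO 2859-1 tables for your lot size:

```python
# Minimal sketch of the daily random-selection step for QA review.
import pandas as pd

SAMPLE_SIZE = 13   # look up the sample size for your lot size in the ISO 2859-1 tables

closed = pd.read_csv("closed_alerts_last_24h.csv")         # one row per closed item
sample = closed.sample(n=min(SAMPLE_SIZE, len(closed)))    # random selection

# Hand reviewers a check sheet: the sampled rows plus blank defect columns to fill in.
sample.assign(reviewer="", defects="", notes="").to_csv("qa_check_sheet.csv", index=False)
```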
How'd we get there? Story in /thread
I'll break the thread down into four key points:
1. What we're solving for
2. Guiding principles
3. Our (current) solution
4. Quick recap
My goal is to share what's working for us and how we got there. But I'd love to hear from others: what's working for you?
What we're solving for: All work is high quality, not just incidents.
On a typical day in our #SOC we'll:
- Process millions of alerts w/ the detection engine
- Send hundreds to analysts for human judgement
Those hundreds of alerts result in:
- Tens of investigations
- A handful of incidents
Highlights from chasing an attacker in #AWS this week:
Initial lead: custom alert using #CloudTrail
- SSH keygen from a weird source (detection sketch below)
- IP enrichment helped
- Historical context for the IAM user: "this isn't normal"
#GuardDuty was not the initial lead
- It did have LOW severity, high volume alerts
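A rough sketch of what that custom #CloudTrail alert could look like with boto3. The event names (CreateKeyPair / ImportKeyPair) and the "expected" network range are assumptions about how SSH key generation from a weird source shows up for you; tune both to your environment:

```python
# Rough sketch: look back 24h in CloudTrail for key pair creation and flag calls
# from outside an expected network range (range and event names are assumptions).
import json
from datetime import datetime, timedelta, timezone
from ipaddress import ip_address, ip_network

import boto3

EXPECTED_NETWORKS = [ip_network("203.0.113.0/24")]   # hypothetical corporate egress range

ct = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(hours=24)

for event_name in ("CreateKeyPair", "ImportKeyPair"):
    pages = ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            src = detail.get("sourceIPAddress", "")
            try:
                weird = not any(ip_address(src) in net for net in EXPECTED_NETWORKS)
            except ValueError:   # service principals like "ec2.amazonaws.com"
                weird = False
            if weird:
                print(detail["eventTime"], event_name, src,
                      detail.get("userIdentity", {}).get("arn"))
```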
Attacker tradecraft:
- Made ingress rules on security groups that allowed any access to anything in the VPC (sweep sketch after this list)
- Interesting API calls: > 300 AuthorizeSecurityGroupIngress calls
- Spun up a new EC2 instance, likely for persistence
- Mostly recon: "What policy permissions does this IAM user have?"
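To sweep for those wide-open ingress rules, a quick boto3 pass over security groups is enough; this sketch just flags anything open to 0.0.0.0/0 so a human can decide which rules the attacker added:

```python
# Quick sweep for ingress rules open to the world.
import boto3

ec2 = boto3.client("ec2")

for page in ec2.get_paginator("describe_security_groups").paginate():
    for sg in page["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            if any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
                print(sg["GroupId"], sg["GroupName"],
                      perm.get("IpProtocol"), perm.get("FromPort"), perm.get("ToPort"))
```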
Investigations:
Orchestration was super helpful. We bring our own.
For any AWS alert we auto-acquire (sketch below):
- Interesting API calls (anything that isn't Get*, List*, Describe*)
- List of assumed roles (+ failures)
- AWS services touched by the user/role
This gave us answers, fast.
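A condensed sketch of that auto-enrichment, assuming the alert hands you an IAM user name and that 24 hours of CloudTrail lookup is enough context. The read-only prefixes mirror the "anything that isn't Get*/List*/Describe*" rule; the function name and return shape are illustrative, not our actual playbook code:

```python
# Condensed sketch of the auto-enrichment: pull 24h of CloudTrail for the alerted user,
# keep the non-read-only calls, tally services touched, and split AssumeRole by outcome.
import json
from collections import Counter
from datetime import datetime, timedelta, timezone

import boto3

READ_ONLY = ("Get", "List", "Describe", "Head")

def enrich(username: str, hours: int = 24) -> dict:
    ct = boto3.client("cloudtrail")
    start = datetime.now(timezone.utc) - timedelta(hours=hours)
    interesting, services = [], Counter()
    assumed_roles = {"success": [], "failure": []}

    pages = ct.get_paginator("lookup_events").paginate(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": username}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            detail = json.loads(event["CloudTrailEvent"])
            name, source = detail["eventName"], detail["eventSource"]
            services[source.split(".")[0]] += 1
            if not name.startswith(READ_ONLY):
                interesting.append((detail["eventTime"], name, source))
            if name == "AssumeRole":
                outcome = "failure" if detail.get("errorCode") else "success"
                assumed_roles[outcome].append(
                    (detail.get("requestParameters") or {}).get("roleArn"))

    return {"interesting_calls": interesting,
            "services_touched": dict(services),
            "assumed_roles": assumed_roles}
```

Attach the result to the case and the analyst starts with answers instead of queries.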