Let's look at process events where the args contain our delivery domain and evil PS:
Process argument events:
process where command_line == "*hastebin.com*" or command_line == "*Invoke-CVUYDBVIUPNEXMR*"
Cheat sheet below.
Let's also ask: how often do cmd.exe / powershell.exe spawn from ScreenConnect.ClientService.exe (the remote admin tool)?
Here's the EQL search:
process where child of [process where process_name == "ScreenConnect.ClientService.exe"]
Run it against active endpoints.
It wasn't common. Detection?
How about CrowdStrike Falcon...
Let's use their event search here:
CommandLine="*hastebin.com*" OR CommandLine="*Invoke-CVUYDBVIUPNEXMR*" OR FileName=*ge4545*
Note the use of the * wildcard operator to search for the encrypted-file extension.
Or maybe you're running MS Defender ATP.
DeviceProcessEvents
| where FileName in~ ("cmd.exe", "powershell.exe")
| where ProcessCommandLine has "hastebin.com"
    or ProcessCommandLine has "Invoke-CVUYDBVIUPNEXMR"
I couldn't fit the full query here; it's shared below.
Or if your EDR wasn't mentioned above, practice searching for these IOCs:
How often do you see PowerShell download / execute a file from a remote resource?
Endgame:
sequence [process where command_line == "*iex*" and process_name in ("powershell.exe", "powershell_ise.exe")] [network where true]
CrowdStrike Falcon:
FileName="powershell.exe" AND CommandLine="* iex *"
(the spaces around iex are important to reduce false positives from wildcard matching)
Once you have the ContextProcessID_decimal value, use that in this query to determine outbound network connections:
(ContextProcessId_decimal=<enter_value> OR TargetProcessId_decimal=<enter_value>) AND NetworkConnectCount_decimal >=1
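If you've exported the relevant events for offline review, the same pivot can be sketched in a few lines of Python. The field names mirror the Falcon query above; the sample events and PID are made up for illustration.

```python
# Sketch of the pivot above over events exported from the EDR as dicts.
# Field names mirror the Falcon query; the sample events are hypothetical.
events = [
    {"ContextProcessId_decimal": 4242, "NetworkConnectCount_decimal": 3,
     "RemoteAddressIP4": "203.0.113.10"},
    {"TargetProcessId_decimal": 4242, "NetworkConnectCount_decimal": 0},
    {"ContextProcessId_decimal": 9999, "NetworkConnectCount_decimal": 1},
]

def outbound_for_pid(events, pid):
    """Return events tied to this PID that recorded at least one network connection."""
    return [
        e for e in events
        if (e.get("ContextProcessId_decimal") == pid
            or e.get("TargetProcessId_decimal") == pid)
        and e.get("NetworkConnectCount_decimal", 0) >= 1
    ]

hits = outbound_for_pid(events, 4242)
print(len(hits))  # 1 -- only the event with outbound connections survives
```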
Windows Defender ATP:
union DeviceProcessEvents, DeviceNetworkEvents
| where ProcessCommandLine has "iex"
    or ProcessCommandLine has "invoke-expression"
| where FileName in~ ("powershell.exe", "powershell_ise.exe")
| where isnotempty(RemoteUrl)
Carbon Black:
cmdline:"iex" AND (process_name:powershell.exe OR process_name:powershell_ise.exe) AND netconn_count:[1 TO *]
Before, "well actually" enters the chat. Good to include processes loading system.management.automation
The above were basic queries to help you learn the tools.
Think about the questions you want to ask and practice asking them using EDR.
Don't forget to learn about host containment. Learn how it works, how to use it.
It could be a thing that ends up saving your org.
Or try this....
1. Open PowerShell
2. wmic /node:localhost process call create "cmd.exe /c notepad"
3. winrs -r:localhost "cmd.exe /c calc"
4. Interrogate your SIEM and EDR
5. Practice containment
6. Do it again
7. Talk about what this means as a team
An effective interview includes 🕘 for the applicant to ask questions.
A few to consider if you're interviewing:
1. What are the big problems you're solving this year?
2. One year from now this person has been successful. What did they do?
3. Conversely, six months from now it didn't work out. What happened?
4. How do you measure performance? What's the cadence?
5. What's the typical tenure for this role?
6. Is the team growing or is this hire a backfill? If backfill: can you talk about the employee's new role?
7. Will we have weekly 1:1s? If so, what's a typical agenda?
8. How many direct reports do you have?
9. What does a typical day look like?
If you're unclear on the traits and skills the hiring manager is seeking, ask!
"What are the traits and skills you're seeking for this role?"
1. ISO 2859-1 (#AQL) to determine sample size
2. #Python #Jupyter notebook to perform random selection
3. Check sheet to spot defects
4. Process runs every 24 hrs
5. (Digestible) #Metrics to improve
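The random-selection step (point 2) is small enough to sketch. The closed-alert list and the sample size here are placeholders; in practice the sample size comes from the ISO 2859-1 (AQL) tables for your lot size and inspection level.

```python
import random

# Sketch of the random-selection step. Inputs are hypothetical:
closed_alerts = [f"alert-{i}" for i in range(500)]  # yesterday's closed work
sample_size = 20                                    # looked up from the AQL tables

rng = random.Random()  # pass a seed here if you want reproducible draws
sample = rng.sample(closed_alerts, sample_size)     # uniform, without replacement

# Reviewers then score each sampled item against the check sheet.
print(len(sample))  # 20
```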
How'd we get there? Story in /thread
I'll break the thread down into four key points:
1. What we're solving for
2. Guiding principles
3. Our (current) solution
4. Quick recap
My goal is to share what's working for us and how we get there. But I'd love to hear from others. What's working for you?
What we're solving for: All work is high quality, not just incidents.
On a typical day in our #SOC we'll:
- Process Ms of alerts w/ detection engine
- Send 100s to analysts for human judgement
Those 100s of alerts result in:
- Tens of investigations
- Handful of incidents
Highlights from chasing an attacker in #AWS this week:
Initial lead: custom alert using #CloudTrail
- SSH keygen from weird source
IP enrichment helped
Historical context for the IAM user told us "this isn't normal."
#GuardDuty was not the initial lead
- It did have LOW-severity, high-volume alerts
Attacker tradecraft:
- Made ingress rules on sec groups that allowed any access to anything in VPC
- Interesting API calls: > 300 AuthorizeSecurityGroupIngress calls
- Spun up a new EC2 instance, likely to persist
- Mostly recon - "What policy permissions does this IAM user have?"
Investigations:
Orchestration was super helpful. We bring our own.
For any AWS alert we auto acquire:
- Interesting API calls (anything that isn't Get*, List*, Describe*)
- List of assumed roles (+ failures)
- AWS services touched by the user/role
- Gave us answers, fast
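The "interesting API calls" filter above reduces to a one-liner: drop read-only CloudTrail event names (Get*/List*/Describe*) and keep everything else. A sketch over made-up events:

```python
# Sketch of the "interesting API calls" filter: discard read-only CloudTrail
# events and keep the rest. The sample events are hypothetical.
READ_ONLY_PREFIXES = ("Get", "List", "Describe")

def interesting_calls(events):
    """Keep CloudTrail events whose eventName isn't read-only."""
    return [
        e for e in events
        if not e["eventName"].startswith(READ_ONLY_PREFIXES)
    ]

events = [
    {"eventName": "DescribeInstances"},
    {"eventName": "AuthorizeSecurityGroupIngress"},
    {"eventName": "ListBuckets"},
    {"eventName": "RunInstances"},
]

print([e["eventName"] for e in interesting_calls(events)])
# ['AuthorizeSecurityGroupIngress', 'RunInstances']
```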