One of the things we struggle with as analysts is talking about investigations in an educated way. Someone asks how you found something and the answer is, "I looked in the logs." Well, no... you did a lot more than that! 1/
You identified a cue that made you think there were other related events to be found, and that those events could indicate an attack. Then you considered which of those events would be most meaningful for dispositioning the timeline you found. 2/
After that, you formed an investigative question that helped you home in on exactly what you were looking for. With the question formed, you queried the log evidence to return a data set that you hoped would provide an answer. 3/
There was a lot of data there, so you refined your query and reduced the data set to something more manageable. You then aggregated the unique values of a field and sorted by least frequent occurrence. 4/
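That aggregate-and-sort move is often called stacking (or stack counting). A minimal sketch of the same idea outside a SIEM, assuming a hypothetical JSON-lines export and a hypothetical file_name field:

```python
import json
from collections import Counter

def stack_field(log_path, field):
    """Count unique values of one field across exported events
    and return them rarest-first (least frequent occurrence)."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            value = event.get(field)
            if value is not None:
                counts[value] += 1
    # Ascending sort puts the rare outliers at the top of the list.
    return sorted(counts.items(), key=lambda kv: kv[1])

# Hypothetical export and field name -- adjust to your log schema.
for value, count in stack_field("process_events.jsonl", "file_name")[:20]:
    print(f"{count:6d}  {value}")
```

Sorting ascending is the whole trick: malicious artifacts tend to be rare, so the outliers surface at the top instead of being buried under thousands of normal values.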
Then, you skimmed the list of values and focused on the one whose naming syntax looked weird. You did some Googling and found a vendor report that gave you more details about malware associated with that file name. 5/
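Some of what reads as "weird syntax" can be written down as cheap heuristics. A hypothetical scoring sketch; the traits, the threshold, and the examples are illustrative, not any vendor's detection logic:

```python
import math
import re

def name_entropy(name):
    """Shannon entropy of the characters in a name; random-looking
    strings score higher than dictionary words."""
    counts = {c: name.count(c) for c in set(name)}
    total = len(name)
    return -sum(n / total * math.log2(n / total) for n in counts.values())

def odd_name_flags(file_name):
    """Flag a few naming traits that tend to catch an analyst's eye."""
    flags = []
    if re.search(r"\.(doc|pdf|jpg|txt)\.exe$", file_name, re.I):
        flags.append("double extension")
    if re.search(r"\s{2,}", file_name):
        flags.append("padded whitespace")
    if name_entropy(file_name.rsplit(".", 1)[0]) > 3.5:  # arbitrary cutoff
        flags.append("high-entropy name")
    return flags

print(odd_name_flags("invoice.pdf.exe"))   # ['double extension']
print(odd_name_flags("xk9qz2vbn81t.exe"))  # ['high-entropy name']
```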
You formed another investigative question to look for a secondary indicator of that malware based on the report; this time a very specific one. You performed a different log query to find your answer and confirm the infection. 6/
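Unlike the stacking query, the confirmation query is an exact-match lookup for one artifact. A hedged sketch, with a made-up indicator and log schema:

```python
import json

# Hypothetical secondary indicator pulled from a vendor report --
# e.g., a C2 domain the malware is known to resolve.
INDICATOR = "update-check.example.net"

def find_indicator(log_path, field, value):
    """Return events whose field exactly matches a known indicator."""
    hits = []
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            if event.get(field) == value:
                hits.append(event)
    return hits

matches = find_indicator("dns_events.jsonl", "query_name", INDICATOR)
print(f"{len(matches)} events match the secondary indicator")
```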
So, that's what you actually did when you "looked in the logs." When we're teaching, learning, or evaluating our investigative workflows, that's how we start to talk about things in a meaningful way. We break it all into pieces and give those pieces names. 7/
All these pieces and names form mental models (chrissanders.org/2019/05/infose…) and frameworks (chrissanders.org/2016/05/how-an…) for teaching and learning and doing. This is also how we become more metacognitively aware. 8/
For the individual analyst, this all accelerates the rate at which you gain experience and gives you mechanisms for discovering and aligning with actual best practices (rather than individual case studies presented as best practices). 9/
Make thinking visible. We're getting there, y'all. 10/10

More from @chrissanders88

25 May
When I write about analyst skills I often want to add a section about metacognitive skills. However, it's sometimes redundant because those skills appear alongside all the other skills analysts leverage.
For example, good analysts often know their limitations. They know what evidence sources they are weak in (metacognitive knowledge) and seek alternative investigative pathways to reach conclusions (metacognitive regulation). That's essential metacognitive stuff.
Sometimes that's easy to deal with. There are a lot of ways to prove program execution (OS logs, prefetch, registry, and so on) and most mid-level analysts are comfortable with at least one of them. Not knowing one isn't a massive burden because you can use others.
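As one concrete pathway: Windows prefetch alone can prove execution, since each .pf file is named after the executable it records. A minimal sketch that only reads file names (no full prefetch parsing), assuming a hypothetical mount point for the acquired drive:

```python
from pathlib import Path

# Hypothetical mount point of the acquired system drive.
PREFETCH_DIR = Path("/mnt/image/Windows/Prefetch")

def executed_programs(prefetch_dir):
    """Prefetch files are named EXENAME-<hash>.pf, so the file name
    alone is evidence that the executable ran."""
    names = set()
    for pf in prefetch_dir.glob("*.pf"):
        # Strip the trailing -HASH.pf to recover the executable name.
        names.add(pf.name.rsplit("-", 1)[0])
    return sorted(names)

for exe in executed_programs(PREFETCH_DIR):
    print(exe)
```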
20 Apr
Let's talk about PREVALENCE ANALYSIS. This is one of the most useful concepts for analysts to understand because it drives so many actions. Prevalence is basically the proportion of a population that shares a specific characteristic. 1/
First, prevalence is often an anomaly detection technique. Let's say you've found a process running with a name you don't recognize. If it's running on every host on the network, you might say it's more likely to be benign. 2/
If the process is only running on one or a couple of hosts, that could be a bit more suspicious. Is there a common thread between the hosts? A pattern? There's more work here. 3/
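Prevalence reduces to a simple proportion: of all hosts observed, how many exhibit the characteristic? A sketch, assuming a hypothetical CSV export of host/process pairs:

```python
import csv
from collections import defaultdict

def process_prevalence(csv_path):
    """Map each process name to the fraction of observed hosts it
    runs on, so rare (low-prevalence) processes stand out."""
    hosts_by_process = defaultdict(set)
    all_hosts = set()
    with open(csv_path) as f:
        for row in csv.DictReader(f):  # assumes host,process_name columns
            all_hosts.add(row["host"])
            hosts_by_process[row["process_name"]].add(row["host"])
    return {
        proc: len(hosts) / len(all_hosts)
        for proc, hosts in hosts_by_process.items()
    }

prev = process_prevalence("process_inventory.csv")
for proc, share in sorted(prev.items(), key=lambda kv: kv[1])[:10]:
    print(f"{share:6.1%}  {proc}")
```

Anything near 100% is probably part of the standard build; anything near one host is where the interesting questions start.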
5 Apr
From recent research...

Upon notification of potential malware infection, SOC analysts tend to spend more time trying to confirm the malware infection, whereas IR/DF analysts tend to assume infection and move toward understanding impact.
Usually, this results in different investigative actions. Confirming infection focuses more on the leading portion of the timeline relevant to the current event. Exploring impact focuses more on the trailing portion of the timeline.
Sometimes the investigative actions can look the same, but that depends on the malware and how the infection presents. Even with similar investigative actions, the intent is different.
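One way to picture it: both analysts can pivot on the same anchor event but slice the timeline in opposite directions. A toy sketch with hypothetical event records:

```python
from datetime import datetime

def split_timeline(events, anchor_time):
    """Partition timeline events around the anchor event (the suspected
    infection): leading events help confirm how it happened, trailing
    events help assess impact."""
    leading = [e for e in events if e["time"] < anchor_time]
    trailing = [e for e in events if e["time"] >= anchor_time]
    return leading, trailing

# Made-up events for illustration.
events = [
    {"time": datetime(2021, 6, 1, 9, 2), "desc": "phishing email delivered"},
    {"time": datetime(2021, 6, 1, 9, 5), "desc": "macro spawns child process"},
    {"time": datetime(2021, 6, 1, 9, 6), "desc": "outbound beacon begins"},
]
leading, trailing = split_timeline(events, datetime(2021, 6, 1, 9, 5))
# SOC-style confirmation works the `leading` list; IR/DF works `trailing`.
```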
2 Apr
The natural thing for inexperienced analysts to want to do is jump to the worst-case scenario and begin investigating that thing. After all, the bad thing is very bad! But that's usually a bad idea for at least three reasons. 1/
First, all investigations are based on questions. You use existing evidence to drive questions whose answers you pursue in evidence. If there is no evidence that indicates the very bad thing, you are probably jumping the gun by looking for it. It's a reach. 2/
Second, the very bad thing is often very hard to investigate. Exfil is a prime example. The techniques for investigating and proving data exfil are often time-consuming and cognitively demanding. Now you're distracting yourself from the actual evidence you already have. 3/
12 Mar
Over and over again, I observe that highly skilled analysts do something that might seem counterintuitive, but is key to their success.

They constantly review the facts -- what they know. That's the current timeline and the relationships they've uncovered.
Inexperienced analysts resist this sometimes because it feels like it takes up a lot of time. But it's worth the time. This is where analysts discover timeline gaps, identify new investigative questions, and prioritize their next move.
As you might imagine, revisiting the facts depends highly on documenting what you know when you come to know it. That's a habit that gets formed over time but can form faster if you understand the value and are deliberate about it.
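One way to be deliberate about it is to give every fact a consistent shape the moment you record it. A minimal sketch; the fields here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TimelineFact:
    """One thing you know, captured when you come to know it."""
    event_time: datetime   # when the event occurred
    source: str            # the evidence source it came from
    observation: str       # the fact itself
    questions: list[str] = field(default_factory=list)  # questions it raises

fact = TimelineFact(
    event_time=datetime(2021, 3, 12, 14, 30),
    source="proxy logs",
    observation="WS-14 downloaded an unknown EXE from a rarely seen domain",
    questions=["Did the EXE execute?", "Did other hosts contact the domain?"],
)
```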
19 Jan
For threat hunting, a non-trivial amount of the work is referencing, creating, and updating system and network inventory. This doesn't get talked about enough as a skill set that someone develops. 1/
Threat hunting is all about finding anomalies that automated detection mechanisms don't find. That means manual anomaly detection, which sometimes means weeding out things that are normal. 2/
For example, let's say you discover a binary that runs in the middle of the night on a host and that's weird! So, you eventually search for the prevalence of that behavior and see it running on other hosts in that department. 3/
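Weeding out normal is effectively a join between your findings and that inventory. A hedged sketch with made-up hosts and a made-up inventory structure:

```python
# Hypothetical inventory: expected software per host, by department.
inventory = {
    "WS-101": {"dept": "finance", "expected": {"ledger.exe"}},
    "WS-102": {"dept": "finance", "expected": {"ledger.exe"}},
    "WS-201": {"dept": "engineering", "expected": {"buildbot.exe"}},
}

def unexplained_hits(findings, inventory):
    """Drop (host, binary) hits that inventory already explains;
    whatever remains is worth a closer look."""
    remaining = []
    for host, binary in findings:
        expected = inventory.get(host, {}).get("expected", set())
        if binary not in expected:
            remaining.append((host, binary))
    return remaining

findings = [("WS-101", "ledger.exe"), ("WS-101", "night_task.exe")]
print(unexplained_hits(findings, inventory))  # [('WS-101', 'night_task.exe')]
```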
