When I write about analyst skills I often want to add a section about metacognitive skills. However, it's sometimes redundant because those skills appear alongside all the other skills analysts leverage.
For example, good analysts often know their limitations. They know which evidence sources they are weak in (knowledge monitoring) and seek alternative investigative pathways to reach conclusions (knowledge regulation). That's essential metacognitive stuff.
Sometimes that's easy to deal with. There are a lot of ways to prove program execution (OS logs, prefetch, registry, and so on) and most mid-level analysts are comfortable with at least one of them. Not knowing one isn't a massive burden because you can use others.
Sometimes that's hard to deal with. Some analysts don't understand network traffic well, and there aren't always great substitutes for it. Likewise, analysts who are well-versed in network traffic but not host analysis limit their investigative pathways too.
You also have evidence sources like memory. In many cases, you can answer questions faster or better with other evidence sources. But sometimes, only memory will do.
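To make the program execution example concrete, here's a minimal sketch of checking just one of those artifacts, Windows Prefetch, by reading file names and timestamps. Treat it as an illustration under assumptions: the default C:\Windows\Prefetch location, and the convention that a .pf file's modified time approximates the last run. Fully parsing the .pf format (run counts, referenced files) would take a dedicated parser.

```python
# A minimal sketch of one "prove program execution" pathway: Windows
# Prefetch. Assumes the default Prefetch location and that a .pf file's
# modified time approximates the program's most recent execution; a real
# parser would also recover run counts and referenced files.
from datetime import datetime, timezone
from pathlib import Path

PREFETCH_DIR = Path(r"C:\Windows\Prefetch")

def executed_programs(prefetch_dir: Path = PREFETCH_DIR):
    """Yield (program name, approximate last run) per prefetch file."""
    for pf in sorted(prefetch_dir.glob("*.pf")):
        # Prefetch files are named like NOTEPAD.EXE-AF43252B.pf
        exe_name = pf.name.rsplit("-", 1)[0]
        last_run = datetime.fromtimestamp(pf.stat().st_mtime, tz=timezone.utc)
        yield exe_name, last_run

if __name__ == "__main__":
    for exe, when in executed_programs():
        print(f"{when:%Y-%m-%d %H:%M:%S}  {exe}")
```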
Like I said, sometimes analysts may choose alternate pathways based on their limits. Other times, they may research the things they don't know. It often depends on where they are in relation to their goals and external factors like time sensitivity.
Monitoring those sorts of things is a metacognitive skill too: understanding your goals and weighing ways to pursue them against your realistic capabilities.
Knowing what you don't know is a marker of expertise. We're not good at helping folks grasp their own skill level in this field, for a lot of reasons, so it's a hard-fought skill built over time. I and others are working on that, though.
So this idea of knowing your own knowledge limits and reacting accordingly is a metacognitive skill, but one way it manifests is in how you form investigative questions and pursue specific evidence sources.
All of this requires some degree of intellectual humility. You need a lot less of it early in your career when people expect less of you, and a lot more later on when people depend on you. So, there are social and intrapersonal things that can get in the way of metacognitive skill building.
All told, I think infosec provides lots of great opportunities to build metacognitive skills in a domain-relevant way. Most experts I talk to show some evidence of that, but they don't realize that's what they're doing, and it's embedded in lots of decisions they make.
One of the things we struggle with in investigations as analysts is even talking about them in an educated way. Someone asks you how you found something and it's, "I looked in the logs". Well, no... you did a lot more than that! 1/
You identified a cue that made you think there were other related events to be found, and those events could indicate an attack. Then you considered which of those events would be most meaningful for disposing the timeline you found. 2/
After that, you formed an investigative question that helped you home in on exactly what you're looking for. With the question formed, you queried the log evidence to return a data set that you hoped would provide an answer. 3/
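Here's a minimal sketch of that last step, turning a question into a query. Everything specific in it is hypothetical (the events.jsonl export, its field names, the WKSTN-042 host); only Event ID 4624, the Windows successful-logon event, is real.

```python
# A minimal sketch of turning an investigative question into a query
# against log evidence. The events.jsonl export, its field names, and
# the host name are hypothetical; 4624 is the real Windows Event ID for
# a successful logon.
import json
from datetime import datetime

QUESTION = "Which accounts logged on to WKSTN-042 around the alert?"
WINDOW = (datetime(2023, 5, 1, 2, 0), datetime(2023, 5, 1, 4, 0))

def answers(path: str = "events.jsonl"):
    """Yield (time, account) for logons matching the question's scope."""
    with open(path) as fh:
        for line in fh:
            event = json.loads(line)
            when = datetime.fromisoformat(event["TimeCreated"])
            if (event["EventID"] == 4624
                    and event["Computer"] == "WKSTN-042"
                    and WINDOW[0] <= when <= WINDOW[1]):
                yield when, event["TargetUserName"]

if __name__ == "__main__":
    print(QUESTION)
    for when, user in answers():
        print(f"  {when:%H:%M:%S}  {user}")
```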
Let's talk about PREVALENCE ANALYSIS. This is one of the most useful concepts for analysts to understand because it drives so many actions. Prevalence is basically what proportion of a population shares a specific characteristic. 1/
First, prevalence often serves as an anomaly detection technique. Let's say you've found a process running with a name you don't recognize. If it's running on every host on the network, you might say it's more likely to be benign. 2/
If the process is only running on one or a couple of hosts, that could be a bit more suspicious. Is there a common thread between the hosts? A pattern? There's more work here. 3/
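The arithmetic behind that judgment is simple enough to sketch. Here's a minimal example, assuming a hypothetical host-to-processes inventory (say, pulled from EDR telemetry); all the names and data are made up.

```python
# A minimal sketch of prevalence analysis: what proportion of hosts
# share a characteristic, here a running process name. The inventory
# and every name in it are hypothetical.
host_processes = {
    "WKSTN-001": {"explorer.exe", "chrome.exe", "updater.exe"},
    "WKSTN-002": {"explorer.exe", "chrome.exe", "updater.exe"},
    "WKSTN-003": {"explorer.exe", "outlook.exe", "xkqz.exe"},
}

def prevalence(process: str) -> float:
    """Return the proportion of hosts where the process was observed."""
    hits = sum(process in procs for procs in host_processes.values())
    return hits / len(host_processes)

for proc in ("updater.exe", "xkqz.exe"):
    print(f"{proc}: on {prevalence(proc):.0%} of hosts")
# updater.exe at 67% reads as routine; xkqz.exe on a single host is the
# low-prevalence outlier worth the extra work described above.
```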
Upon notification of potential malware infection, SOC analysts tend to spend more time trying to confirm the malware infection, whereas IR/DF analysts tend to assume infection and move toward understanding impact.
Usually, this results in different investigative actions. Confirming infection focuses more on the leading portion of the timeline relevant to the current event. Exploring impact focuses more on the trailing portion of the timeline.
Sometimes the investigative actions can look the same, but that depends on the malware and how the infection presents. Even with similar investigative actions, the intent is different.
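If the leading/trailing distinction feels abstract, here's a minimal sketch: the same hypothetical timeline, partitioned around the alert. Confirming infection reads the leading half; exploring impact reads the trailing half.

```python
# A minimal sketch of the leading/trailing split around an alert. All
# timeline entries here are hypothetical.
from datetime import datetime

alert_time = datetime(2023, 5, 1, 2, 13)
timeline = [
    (datetime(2023, 5, 1, 2, 5), "user opened invoice.doc attachment"),
    (datetime(2023, 5, 1, 2, 13), "AV alert: UPDATER.EXE detected"),
    (datetime(2023, 5, 1, 2, 20), "outbound beacon to rare domain"),
]

# SOC framing: what led up to this? Did infection actually occur?
leading = [e for e in timeline if e[0] < alert_time]
# IR/DF framing: assume infection; what happened next, what's the impact?
trailing = [e for e in timeline if e[0] > alert_time]

print("Leading: ", leading)
print("Trailing:", trailing)
```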
The natural thing for inexperienced analysts to want to do is jump to the worst-case scenario and begin investigating that thing. After all, the bad thing is very bad! But, that's usually a bad idea for at least three reasons. 1/
First, all investigations are based on questions. You use existing evidence to drive questions whose answers you pursue in other evidence. If there is no evidence that indicates the very bad thing, you are probably jumping the gun by looking for it. It's a reach. 2/
Second, the very bad thing is often very hard to investigate. Exfil is a prime example. The techniques for investigating and proving data exfil are often time-consuming and cognitively demanding. Now you're distracting yourself from the actual evidence you already have. 3/
Over and over again, I observe that highly skilled analysts do something that might seem counterintuitive, but is key to their success.
They constantly review the facts -- what they know. That's the current timeline and the relationships they've uncovered.
Inexperienced analysts resist this sometimes because it feels like it takes up a lot of time. But it's worth the time. This is where analysts discover timeline gaps, identify new investigative questions, and prioritize their next move.
As you might imagine, revisiting the facts depends highly on documenting what you know when you come to know it. That's a habit that gets formed over time but can form faster if you understand the value and are deliberate about it.
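One way to be deliberate about it is to give every fact a consistent shape the moment you learn it. Here's a minimal sketch; the fields and the sample entry are illustrative, not any kind of standard.

```python
# A minimal sketch of documenting facts as you learn them: each entry
# records the fact, its evidence source, and when the activity happened,
# so reviewing the timeline later surfaces gaps and new questions. The
# fields and sample data are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TimelineEntry:
    event_time: datetime   # when the activity happened
    source: str            # evidence source (prefetch, proxy log, ...)
    fact: str              # what the evidence shows
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

timeline: list[TimelineEntry] = [
    TimelineEntry(
        event_time=datetime(2023, 5, 1, 2, 13),
        source="prefetch",
        fact="UPDATER.EXE executed on WKSTN-042",
    ),
]

# Reviewing the facts is just re-reading the entries in event order,
# asking what's missing between them.
for entry in sorted(timeline, key=lambda e: e.event_time):
    print(f"{entry.event_time:%Y-%m-%d %H:%M}  [{entry.source}]  {entry.fact}")
```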
For threat hunting, a non-trivial amount of the work is referencing, creating, and updating system and network inventory. This doesn't get talked about enough as a skill set that someone develops. 1/
Threat hunting is all about finding anomalies that automated detection mechanisms don't find. That means manual anomaly detection, which sometimes comes down to weeding out things that are normal. 2/
For example, let's say you discover a binary that runs in the middle of the night on a host and that's weird! So, you eventually search for the prevalence of that behavior and see it running on other hosts in that department. 3/
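That's where the inventory work pays off. Here's a minimal sketch of pairing prevalence hits with inventory context; the hosts, departments, and fields are all hypothetical.

```python
# A minimal sketch of checking prevalence hits against inventory: if
# all the hosts running the odd nighttime binary share a department,
# it may be sanctioned tooling for that group. All data is hypothetical.
hits = {"WKSTN-031", "WKSTN-044", "WKSTN-051"}

inventory = {
    "WKSTN-031": {"department": "Accounting", "owner": "jlee"},
    "WKSTN-044": {"department": "Accounting", "owner": "mkhan"},
    "WKSTN-051": {"department": "Accounting", "owner": "tshaw"},
    "WKSTN-060": {"department": "Engineering", "owner": "adiaz"},
}

departments = {inventory[h]["department"] for h in hits if h in inventory}
if len(departments) == 1:
    # One shared department suggests group-specific software; confirm
    # with that department before escalating.
    print(f"All hits in {departments.pop()}: ask them about the binary")
else:
    print(f"Hits span {len(departments)} departments: keep digging")
```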