Some of the work I'm most proud of from my time at Mandiant was pioneering the practice of building investigative actions *into detection signatures* as they were written. This had a profound impact across the detection and product teams, and it made the tool much more accessible.
We included two main components: a plain-English version of the question the analyst needed to answer, and the search syntax that would provide the data to answer it. In the UI, this manifested as something I named "Guided Investigations".
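To make that concrete, here's a rough sketch of what one of those question/search pairings might look like. The field names, placeholder syntax, and query language here are hypothetical illustrations, not the actual format we shipped:

```yaml
# Hypothetical sketch of guided investigation metadata attached to a signature.
# Each entry pairs a plain-English analyst question with the search that
# retrieves the data needed to answer it. Placeholders like {alert.host}
# would be filled in from the alert at investigation time.
guided_investigation:
  - question: "Did this host communicate with the suspicious domain again after the alert fired?"
    search: 'event_type:network AND host:"{alert.host}" AND dest_domain:"{alert.dest_domain}" AND timestamp:>"{alert.timestamp}"'
  - question: "What process initiated the connection, and is it one we commonly see in this environment?"
    search: 'event_type:process AND host:"{alert.host}" AND pid:{alert.pid}'
```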
Building these in helped detection engineers write much better signatures because it forced them to think more deliberately about the consumer of the alerts (the analyst). It led to higher quality detection logic and clearer metadata, including rule descriptions.
All of those things helped limit the impact of the confirmation bias that signature metadata can cause.
When we committed to this idea, any rule that DEs wrote had to include guided investigation steps.
From a product team standpoint, it helped non-security devs and other folks grasp the meaning of the search syntax and the investigative workflow, which could be complex and overwhelming. They came to understand the analyst's workflow better.
For the analysts themselves, it was a game-changer. For experienced analysts, the guided investigation steps let the app automatically retrieve the information we knew they would want. It automated some mundane tasks while occasionally providing new insight.
For less experienced analysts, guided investigations actually *taught them* how to be better analysts. As they reviewed the resulting data and found relevant events, they learned the value of the investigative questions we shipped along with the signatures.
Good software helps experts do things, but great software teaches people to be experts. This was our good to great step for the analysis and investigation portion of the platform.
I think most signature formats should have support and guidelines for adding investigative steps as metadata into the detection rule. That includes popular formats like Suricata (network), Sigma (logs), and Yara (files).
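As a sketch of what that could look like in Sigma (a YAML-based format), here's a rule with a hypothetical `investigation` key. Sigma doesn't define such a field today, so this is an illustration of the idea rather than supported syntax:

```yaml
title: Scheduled Task Creation via Schtasks
status: experimental
description: Detects schtasks.exe creating a new scheduled task, a common persistence technique.
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\schtasks.exe'
    CommandLine|contains: '/create'
  condition: selection
level: medium
# Hypothetical extension: investigative steps shipped alongside the detection logic.
investigation:
  # The 'search' values are pseudo-queries; a real implementation would use
  # the syntax of whatever backend the rule is deployed to.
  - question: What does the newly created task execute, and when does it run?
    search: process_creation where Image endswith '\schtasks.exe' and CommandLine contains '/create' | show CommandLine
  - question: Has the parent process created scheduled tasks on other hosts recently?
    search: process_creation where ParentImage == '{alert.ParentImage}' | group by Hostname
```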
I might stop short of requiring public signature contributors to add investigation steps to every rule before it's accepted (not all DEs are good analysts). But, I do think an effort to add investigative steps into public rules is worth the time.
Psychologically, it's worth noting that the inclusion of these steps also helps decrease noise among investigating analysts. That is, it reduces the variability of the disposition judgments that multiple analysts make from the same input.
This strategy also reduces bias, particularly the sort analysts often hold toward specific types of evidence sources (host vs. network, etc.).
We won't see machines solving investigations for us in my lifetime. But, we can use their strengths to make human analysts more efficient.
And that augmentation? It's helping human analysts to do their jobs faster and more accurately. But, it's also helping them develop expertise, reduce noise, and limit the effects of bias.
I'm seeing more folks start to embrace this idea now, which is exciting. The folks at @expel_io are a good example. @jhencinski and some others there were around at Mandiant while I was doing some of this initial work and were supportive.
It might sound odd, but at the time, focusing on the human part of the investigation was pretty outside the box and not met with universal support. All most folks wanted to talk about was threat intel and malware.
So folks like John, Peter, and a few others were very curious when I would talk about the idea of investigative questions, a cognitive framework for analysis, building investigation steps into signatures, and deliberate strategies to reduce bias.
A lot of those things get talked about more commonly now, which is great. It's also cool to see some of the seeds that were planted blossom into products that don't just work, but also move the practice forward. The Expel platform is a powerful example.
There has also been some great work over the past couple of years by the @securityonion team and @DefensiveDepth to build investigation steps into their workflow through their playbook feature: docs.securityonion.net/en/2.3/playboo….
Overall, the Security Onion team is doing a great job and moving toward powerful things. I think that will continue to be impressive now that they're doing some of their own UI development, which is where many of these ideas manifest.
Slowly, more vendors are embracing human-centric and cognition principles in their product work, even if they don't know that's what they're doing. I want to see that become more deliberate.
I also want to see the approach embraced more by maintainers of detection technology, and particularly the widely accepted community-driven detection signature formats. These folks can steer the investigation process that happens post-detection in more ways than they think!
I generally define all of these approaches as part of cognition-centric analysis.
Intrusion analysis is a function of cognition... it's learning. If you understand how humans learn, you understand a lot of how investigations work.
This is a human-focused approach that acknowledges the psychological strengths and weaknesses of the analyst and seeks to augment their cognitive processes. Machines and automation exist for that purpose alone: augmenting the human.
Intrusion analysis is no more about a computer than astronomy is about a telescope. The computer is the tool, the human is the star. We should seek to develop expertise, build diversity of experience, reduce noise, and limit the effects of bias.
I did not plan to get up this morning and give an exposition about cognition, analysts, and guided investigations.... but here is where we find ourselves 😂
