There's a lot of public discussion about vendor detection tools and what they detect versus what customers expect. Behind the scenes, there's some interesting decision making at these vendors about how they manage detection signatures. A thread... 1/
At a vendor, when you build a detection ruleset for lots of customers, you have to approach things differently because you don't control the networks where these rules are deployed and you can't tune them yourself. 2/
One facet of this challenge is deciding how you prioritize rule efficacy -- we're talking accuracy/precision and the number of false positives that analysts have to investigate and tune. 3/
There are two ends of the spectrum here. You can...
Write only very specific rules with no false positives. But, you'll miss things.
or
You can write rules with very wide coverage. But you'll get a lot of false positives that require tuning. 4/
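To make that tradeoff concrete, here's a toy sketch comparing a narrow rule and a broad rule over the same set of events. The numbers are entirely hypothetical -- they're just there to show how precision and recall pull against each other:

```python
# Toy illustration of the specificity/coverage tradeoff.
# tp = true positives, fp = false positives, fn = missed detections.
# All numbers are hypothetical, not from any real ruleset.

def precision(tp, fp):
    """Of the alerts that fired, what fraction were real?"""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the real malicious events, what fraction did we catch?"""
    return tp / (tp + fn)

# Narrow rule: almost never fires on benign traffic, but misses variants.
narrow = {"tp": 8, "fp": 1, "fn": 12}

# Broad rule: catches most variants, but buries analysts in false positives.
broad = {"tp": 18, "fp": 60, "fn": 2}

for name, r in [("narrow", narrow), ("broad", broad)]:
    print(name,
          f"precision={precision(r['tp'], r['fp']):.2f}",
          f"recall={recall(r['tp'], r['fn']):.2f}")
# narrow precision=0.89 recall=0.40
# broad precision=0.23 recall=0.90
```

Neither column is "right" -- that's the whole point of the dichotomy.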
Pretty early on, most vendors come to see this as a dichotomy and a choice they have to make explicitly, because the folks writing these rules need guiding principles to work from.
They usually choose the lower coverage / lower FP route. Here's why... 5/
First, they know that, realistically, most customers can't tune the rules they provide. They simply don't have the staff, expertise, or even the visibility. Yes, that probably indicates many other problems, even for true positives. 6/
Second, some customers refuse to tune the rules. They believe the vendor should do this for them. Of course, that's usually not realistic because this tuning requires detailed network knowledge and access to data the vendor doesn't have. 7/
Third, sales pressure. Execs make the call bc they don't want to lose sales from #1 and #2. They know that many customers will complain or kick the vendor out (whether justified or not). Fewer FPs > more coverage from their perspective. 8/
Of course, that line of thinking is usually optimizing for the short term. Opinions change when a customer gets popped bc a tool didn't have coverage for something. A lot of these folks only think quarter to quarter. 9/
Fourth, there's precedent for this from AV vendors. While IDS kinda grew up looking down at AV, lots of modern companies have AV vendor roots and take a similar approach with their detection strategy. Likewise, many customers aren't conditioned to expect anything different. 10/
Fifth and finally, it's harder to build tools that give people what they need to tune the rules. We're talking exclusion lists, time and criteria-based suppressions, and so on. It's more UX and dev time. 11/
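Those two primitives -- exclusion lists and time-based suppressions -- can be sketched in a few lines. This is a minimal illustration of the concept; the rule IDs, field names, and structure here are all invented for the example, not any vendor's API:

```python
from datetime import datetime

# Exclusion list: (rule_id, field, value) tuples an analyst has marked benign.
# Hypothetical entries for illustration.
EXCLUSIONS = {
    ("rule-1001", "src_ip", "10.0.0.5"),     # internal vuln scanner
    ("rule-2002", "process", "backup.exe"),  # nightly backup job
}

# Time-based suppressions: silence a rule during a known-noisy window.
SUPPRESSIONS = [
    {"rule_id": "rule-3003",
     "start": datetime(2023, 1, 1, 2, 0),
     "end": datetime(2023, 1, 1, 4, 0)},
]

def should_alert(rule_id, fields, ts):
    """Return False if the alert matches an exclusion or a suppression window."""
    for field, value in fields.items():
        if (rule_id, field, value) in EXCLUSIONS:
            return False
    for s in SUPPRESSIONS:
        if s["rule_id"] == rule_id and s["start"] <= ts <= s["end"]:
            return False
    return True

print(should_alert("rule-1001", {"src_ip": "10.0.0.5"}, datetime(2023, 1, 1, 12, 0)))  # False
print(should_alert("rule-1001", {"src_ip": "10.0.0.9"}, datetime(2023, 1, 1, 12, 0)))  # True
```

The logic is trivial; the hard part is the UX that lets an analyst create and manage these entries safely, which is exactly where the dev time goes.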
I led one of these rule teams for a large vendor -- a pretty small club of people. I experienced a lot of these things firsthand and heard similar stories from my contemporaries. It's usually some combination of these factors, not just one. 12/
I don't say this to take up for vendors -- because honestly, I think an approach focused exclusively on either strategy is problematic and holds the industry back. 13/
I think the best model involves multiple rule sets. One that is high efficacy and on by default, and a second with broader coverage that requires tuning. 14/
Of course, you have to give folks the tools to do that tuning. Again, that means having detection engineers and analysts in the room with UX people and allocating dev time to make it possible. 15/
BTW, you can often tell which approach vendors take through their marketing. If they market their low FP rates, it's probably because of less coverage. If they market catching everything, expect to do lots of tuning. 16/
Last but not least -- this is also where I remind folks that an alert is never an answer -- it's a question. It's the analyst's job to find the answers.
While we're doing a Detection Engineering AMA, how do you build these sorta skills if you want to do that job for a living? Big question, but I'd focus on three areas for early career folks...
Investigative Experience -- Tuning detection involves investigating alerts from signatures so you need to be able to do that at some level. A year or two of SOC experience is a good way to start.
Detection Syntax -- You have to be able to express detection logic. Suricata for network traffic, Sigma for logs, YARA for files. Learn those and you can detect a lot of evil. They translate well to vendor-specific stuff.
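For a taste of what that logic looks like, here's a minimal Sigma-style rule. This is a hypothetical example I've written for illustration (not from the official Sigma ruleset), flagging certutil.exe being used to fetch a file from a URL -- a well-known living-off-the-land technique:

```yaml
title: Certutil Download Attempt
status: experimental
description: Detects certutil.exe invoked with URL cache flags (hypothetical example)
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    Image|endswith: '\certutil.exe'
    CommandLine|contains: '-urlcache'
  condition: selection
level: medium
```

The `logsource` block says where to look, the `detection` block says what to match, and the `condition` combines selections -- the same basic shape carries over to vendor-specific languages.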
This relates to my 4th and 5th reasons why these decisions happen -- AV company tactics and giving folks what they need to tune rules. That actually means GIVING analysts the rule logic. I could go on and on about this.
Most companies don't want to give out their rule logic because they see it as a sensitive trade secret. This is nonsense. A rule set isn't a detection company's most valuable intellectual property -- it's their processes for creating those rules and the staff who do the work.
Limiting access to detection logic makes it harder for your customer. It is MUCH more difficult to investigate alerts when you don't know what they are actually detecting and how they're doing it.
I've seen many good analysts give clear, compelling explanations as to why tuning is important but fail to convince the decision-makers that this needs a dedicated person or a day a week from an existing person.
The thing that needs to become more commonly accepted is that if you decide your company needs a SOC, then that has to include a detection tuning capability. It also needs to be run by people who've seen this thing work well.
Some of these are companies that developed their own "standard" for expressing detection logic and don't even use it in most of their tools 😂
This comes from a lot of places. Usually, someone develops a detection tool alone or as part of a small or isolated team and chooses whatever format they want; then the project grows and it becomes painful to change.
I'm really excited to share that our newest online class, Detection Engineering with Sigma, is open this morning. You can learn more and register at learnsigmarules.com.
The course is discounted for launch until next Friday.
If you're not familiar with @sigma_hq, you should be! It's the open standard detection signature format for logs. Said another way, Sigma is for logs what Snort/Suricata are for network traffic and YARA is for files.
Perhaps the best thing about Sigma is that you can easily convert its rules into LOTS of other formats using the Sigmac tool. Things like Elastic, Splunk, Graylog, NetWitness, Carbon Black, and so on.
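The core idea behind that conversion is simple: walk the rule's detection fields and emit each backend's query syntax. Here's a toy sketch of the concept -- this is NOT sigmac itself, and real backends also handle conditions, field modifiers, and field-name mappings:

```python
# Toy illustration of Sigma-to-backend conversion -- not the real sigmac.
# Takes a flat field:value selection and renders it in two query dialects.

def to_splunk_query(selection):
    """Render a selection as a Splunk-style search string (implicit AND)."""
    return " ".join(f'{field}="{value}"' for field, value in selection.items())

def to_elastic_query(selection):
    """Render the same selection as an Elasticsearch query_string."""
    return " AND ".join(f'{field}:"{value}"' for field, value in selection.items())

# Hypothetical selection from a Windows process-creation rule.
selection = {"EventID": "4688", "NewProcessName": "certutil.exe"}

print(to_splunk_query(selection))   # EventID="4688" NewProcessName="certutil.exe"
print(to_elastic_query(selection))  # EventID:"4688" AND NewProcessName:"certutil.exe"
```

Write the logic once, deploy it everywhere your customers already have tooling -- that's why a shared format matters.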
One of the unique challenges of forensic analysis is that we're focused both on determining what events happened and the disposition of those events (benign or malicious). A failure to do one well can lead to mistakes with the other. 1/
Generally speaking, analysts interpret evidence to look for cues that spawn more investigative actions. Those cues can be relational (indicate the presence of related events), dispositional (indicate the malicious or benign nature of something), or even both at the same time. 2/
Not only do we have to explore relationships, but we also have to characterize and conceptualize them. That means we're constantly switching between cause/effect analysis and pattern matching of a variety of sorts. 3/