While we're doing a Detection Engineering AMA, how do you build these sorta skills if you want to do that job for a living? Big question, but I'd focus on three areas for early career folks...
Investigative Experience -- Tuning detections involves investigating the alerts your signatures generate, so you need to be able to do that at some level. A year or two of SOC experience is a good way to start.
Detection Syntax -- You have to be able to express detection logic. Suricata for network traffic, Sigma for logs, YARA for files. Learn those and you can detect a lot of evil, and they translate well to vendor-specific formats (toy sketch below).
Attack Simulation -- You need to know how attacks manifest in files/logs/systems. Set up something like @DetectionLab and recreate attack scenarios, treating it like a science experiment. Also, get comfortable using malware sandboxes.
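To make the first two concrete, here's a toy sketch (my illustration, not from the thread) of the kind of logic a detection rule expresses -- flagging PowerShell launched with an encoded command -- which you could then recreate and test against events generated in a lab like @DetectionLab:

```python
# Toy detection logic (illustrative only): flag process-creation events where
# PowerShell was launched with an encoded command. Formats like Sigma express
# this same kind of matching declaratively; the field names here mirror
# Sysmon process-creation events.
def is_suspicious_process_creation(event: dict) -> bool:
    image = event.get("Image", "").lower()
    command_line = event.get("CommandLine", "").lower()
    return image.endswith("\\powershell.exe") and "-encodedcommand" in command_line

# A simulated event, like one you'd generate during an attack-simulation run.
sample_event = {
    "Image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "CommandLine": "powershell.exe -EncodedCommand SQBFAFgA...",
}
print(is_suspicious_process_creation(sample_event))  # True
```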
If you can do those things you can probably do this job (non-technical skills aside). No, you don't need to be able to write code. No, you don't need to be into AI/ML. Most detection engineering starts with signatures.
After that, it gets more specialized and evidence/platform-dependent.
You can learn these things on your own. You can also take classes to accelerate that learning (networkdefense.io).
Overall, I think detection engineering is one of the more accessible places for folks to join infosec because the feedback mechanisms within teams are plentiful.
You can find these jobs with the big security vendors, larger localized SOCs, or managed service providers. They're pretty amenable to remote work, too!
This relates to my 4th and 5th reasons why these decisions happen -- AV company tactics and giving folks what they need to tune rules. That actually means GIVING analysts the rule logic. I could go on and on about this.
Most companies don't want to give out their rule logic because they see it as a sensitive trade secret. This is nonsense. A rule set isn't a detection company's most valuable intellectual property; that's the processes for creating those rules and the staff who do the work.
Limiting access to detection logic makes it harder for your customer. It is MUCH more difficult to investigate alerts when you don't know what they are actually detecting and how they're doing it.
I've seen many good analysts give clear, compelling explanations as to why tuning is important but fail to convince the decision-makers that this needs a dedicated person or a day a week from an existing person.
The thing that needs to become more commonly accepted is that if you decide your company needs a SOC, that has to include a detection tuning capability. It also needs to be run by people who've seen that capability work well.
Some of these are companies that developed their own "standard" for expressing detection logic and don't even use it in most of their tools 😂
This comes from a lot of places. Usually, someone develops a detection tool alone or as part of a small or isolated team and chooses whatever format they want; then the project grows and it becomes painful to change.
There's often interesting public discussion about vendor detection tools and what they detect vs expectations. There's some interesting decision making that happens behind the scenes at these vendors when it comes to how they manage detection signatures. A thread... 1/
At a vendor, when you build a detection ruleset for lots of customers, you have to approach things a bit differently because you don't control the networks where these rules are deployed and can't tune them yourself. 2/
One facet of this challenge is deciding how you prioritize rule efficacy -- meaning accuracy/precision versus the number of false positives that analysts have to investigate and tune out. 3/
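As a rough illustration of that trade-off (mine, not a description of any vendor's process), you can summarize a rule's efficacy from the triage verdicts of the alerts it produced:

```python
# Toy efficacy summary: given analyst verdicts for a rule's alerts, compute
# the false-positive load and the rule's precision.
def rule_efficacy(verdicts: list[str]) -> dict:
    true_positives = verdicts.count("malicious")
    false_positives = verdicts.count("benign")
    total = true_positives + false_positives
    return {
        "alerts": total,
        "false_positives": false_positives,
        "precision": true_positives / total if total else 0.0,
    }

# e.g. a rule that fired 8 times but only caught real activity twice:
print(rule_efficacy(["malicious", "benign", "benign", "benign",
                     "malicious", "benign", "benign", "benign"]))
# {'alerts': 8, 'false_positives': 6, 'precision': 0.25}
```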
I'm really excited to share that our newest online class, Detection Engineering with Sigma, is open this morning. You can learn more and register at learnsigmarules.com.
The course is discounted for launch until next Friday.
If you're not familiar with @sigma_hq, you should be! It's the open standard detection signature format for logs. Said another way, Sigma is for logs what Snort/Suricata are for network traffic and YARA is for files.
Perhaps the best thing about Sigma is that you can easily convert its rules into LOTS of other formats using the Sigmac tool. Things like Elastic, Splunk, Graylog, NetWitness, Carbon Black, and so on.
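For a concrete feel, here's a minimal sketch of that conversion using the pySigma library (the successor to the legacy sigmac converter); the packages, backend, and rule below are my assumptions for illustration, not course material. The rule expresses the same PowerShell-encoded-command logic as the toy sketch earlier in this thread.

```python
# Convert a Sigma rule into a Splunk query with pySigma (assumes the
# `pysigma` and `pysigma-backend-splunk` packages are installed).
from sigma.collection import SigmaCollection
from sigma.backends.splunk import SplunkBackend

RULE = """
title: Suspicious PowerShell Encoded Command
status: experimental
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        Image|endswith: '\\powershell.exe'
        CommandLine|contains: '-EncodedCommand'
    condition: selection
level: medium
"""

rules = SigmaCollection.from_yaml(RULE)       # parse the YAML rule
for query in SplunkBackend().convert(rules):  # emit the backend's query syntax
    print(query)
```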
One of the unique challenges of forensic analysis is that we're focused both on determining what events happened and the disposition of those events (benign or malicious). A failure to do one well can lead to mistakes with the other. 1/
Generally speaking, analysts interpret evidence to look for cues that spawn more investigative actions. Those cues can be relational (indicate the presence of related events), dispositional (indicate the malicious or benign nature of something), or even both at the same time. 2/
Not only do we have to explore relationships, but we also have to characterize and conceptualize them. That means we're constantly switching between cause/effect analysis and pattern matching of a variety of sorts. 3/