Since I spend so much time talking to and researching SOCs and SOC analysts, I often get asked, "What's the biggest difference between high-growth and low-growth SOCs?"
The answer? Expectations.
1/
First, what do I mean by growth? I'm talking about places where analysts can grow their abilities. These are the places that take complete novices and help them achieve competence or take experienced analysts and help them specialize. Growing human ability.
2/
The organizations that support growth well are those where leadership has high (but realistic) expectations for analysts. Those expectations most often center on analysts being able to make reliable, evidence-driven decisions confidently.
3/
I remember teaching in an org once where I posed a scenario and asked an analyst, "What if you couldn't explain why this happened?"
They didn't understand -- "Why wouldn't I be able to explain what happened?"
4/
This was a person only a couple of years into their career. Their leadership expected analysts to make decisions based on what they knew DID and DID NOT happen. They also gave them the tools and data to get there.
5/
When I see organizations that don't have the right expectations, it is primarily due to putting the wrong people in management positions. Poor access to tools and data is usually a symptom of that.
6/
This is a good time to remember that computer security is not science -- there's no true phenomenon whose explanation is outside our capabilities. It's engineering. You can explain everything a computer does with the right people in the room.
7/
When analysts have that mindset and people around them who support it, their growth is multiplied. They take ownership of the task of understanding why things happen, and when they do, their confidence soars. It's a game-changer.
8/
The work of the analyst is hard. Very hard. Experienced folks forget that because of the curse of knowledge -- it's hard to remember what it was like not to know something. But, this work is still primarily engineering of known quantities. The questions ARE answerable.
9/9
Also, if you're the sort of analyst who sets high expectations of yourself even when your manager doesn't, good on you. You can lead from the bottom. Folks will recognize your success and it might not be long before you're the manager and can set better expectations for all. 10/9
Like lots of folks, I'm pretty miffed by the lack of robust virtualization support on Apple M1 hardware. I hope that gets fixed soon. But it also got me thinking about decision-making at big vendors like Apple and others.
1/
For example, the security community (myself included) is often critical of Microsoft for some of their decision-making when it comes to usability/flexibility vs. security. Two things immediately come to mind...
2/
1. Macros. The idea that they exist, are usable by default, and that the UI pushes users more toward enabling them than disabling them.
2. Default logging configs. Fairly minimal, with lots of security-relevant stuff left out (integrate Sysmon already!).
3/
A lot of tips about good writing are rooted in the psychology of your reader. For example, if you want your reader to understand a risk (a probability), is it better to express that as a relative frequency (1 in 20) or a percentage (5%)?
1/
Typically, people understand risk better as a frequency. For example, consider the likelihood of a kid dropping out of high school. You could say that 5% of kids drop out, or that 1 in 20 does. Why is the latter more effective?
2/
First, it's something you can more easily visualize. There's some evidence you might be converting the percentage into the frequency representation in your head anyway. Weber et al (2018) talked about this here: frontiersin.org/articles/10.33…
3/
Abstractions are something analysts have to deal with in lots of forms. Abstraction is the process of taking away characteristics of something to represent it more simply. So, what does that look like? 1/
Well, speaking broadly, let's say that I tell you I had scrambled eggs with parsley and tarragon for breakfast. You can probably picture that very clearly in your mind and it will be fairly accurate to reality. However... 2/
What if I just tell you I had eggs? Or that I had breakfast? Your perception of reality may differ greatly from what I actually ate. The abstraction increases the opportunity for error.
One of my research areas that I write about often is curiosity and how it manifests in infosec education and practice. A topic that relates to curiosity is boredom, which I've done some recent reading on. I thought I'd share a bit about that. 1/
First, what is boredom? A consensus definition is that boredom is the uncomfortable feeling of wanting to engage in satisfying activity without being able to do so. 2/
When you're bored, two things happen: 1. You want to do something but don't want to do anything. 2. You are not mentally occupied in a way that leverages your capacities or skills.
Let's talk about some lessons gathered from watching a student over the weekend quickly go from struggling on an investigation lab and...
"I'm stuck"
to finished and...
"I don’t know if you just Yoda’d the hell out of me or what"
1/x
This particular student emailed to say they were stuck and gave me some miscellaneous facts they had discovered. I responded and asked them to lay out a timeline of what they knew already so that we could work together to spot the gaps. 2/
The truth is that when this inquiry is taken seriously, it doesn't often result in us having to spot those gaps together at all because the student figures it out on their own. Why does this happen? Two main reasons... 3/
One of the things I absolutely love about our new @sigma_hq course is that one of the final challenges involves building your own new rule (we provide a bunch of ideas) with the option of actually submitting it to the public repo. Folks learn and contribute detection value back to the community.
As part of that, @DefensiveDepth walks students through the process, even if they've never used git before. The Sigma community also does a great job of providing input and additional testing.
It's awesome to watch it all come together. I'm looking at a rule in the public repo now written by a student who didn't know anything about Sigma a month ago. It's been tested, vetted, and now it'll help folks find some evil.
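For anyone who hasn't written one, here's a rough sketch of what a Sigma rule looks like in the project's YAML format. The detection logic below is a generic, hypothetical example for illustration, not the student's actual rule:

    title: Suspicious Recon via Whoami
    status: experimental
    description: Detects execution of whoami.exe, which attackers commonly run right after gaining access
    author: hypothetical example for illustration
    logsource:
        product: windows
        category: process_creation
    detection:
        selection:
            # match any process whose image path ends with whoami.exe
            Image|endswith: '\whoami.exe'
        condition: selection
    falsepositives:
        - Administrators or scripts legitimately checking the current user context
    level: medium

From there, the Sigma tooling converts rules like this into queries for whatever SIEM or log platform your shop runs, which is what makes a single community-submitted rule so broadly useful.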