Like lots of folks, I'm pretty miffed by the lack of robust virtualization support on Apple M1 hardware. I hope that gets fixed soon. But it also got me thinking about decision-making at big vendors like Apple and others.
1/
For example, the security community (myself included) is often critical of Microsoft for some of their decision-making when it comes to usability/flexibility vs. security. Two things immediately come to mind...
2/
1. Macros. The idea that they exist, are usable by default, and that the UI pushes users more toward enabling them than disabling them.
2. Default logging configs. Fairly minimal, with lots of security-relevant stuff left out (integrate Sysmon already!).
3/
In both of those cases and others, at some point, MS is making a decision that leans toward usability/flexibility and away from security. From what I understand, there are small (but not that small) groups these things REALLY matter for.
4/
But, let's compare that to the Apple M1/VM issue. That really matters for a small (but not that small) group too. What is similar about those decisions?
5/
I think most folks probably recognize financial impact first. How much money does the company actually lose if they lose the affected group? I don't really know. But I'm sure it comes into play.
6/
However, let's change a couple of words. How much money does the company PERCEIVE they lose if they lose the affected group?
7/
What I'm getting at here is that some groups are better organized to make their dissatisfaction heard and effect change. Regardless of actual losses (not seen until later), organized groups make expected losses (seen now) more obvious and tangible.
8/
In general, I think information security would be considered a group that is not well organized. We are individually loud, but not so much collectively. At least right now.
9/
Constituents of the groups that push against infosec-related interests are often better organized and make their collective voices heard more easily. Big companies, entire industries, and so on.
10/
I think many in infosec would see Apple or MS as a company that should be "one of us" and default to the best interest of security more often than not on these sorts of tradeoffs. Of course, that is not the case.
11/
This gets to something that is certainly not a new question... "How does infosec unify their collective voices to be heard *and prioritized* more?"
I just thought this was an interesting case study that relates to it.
12/
The same question comes up at the individual organization and even team levels. But here, I'm talking specifically about our interactions with the few big vendors on whom we all rely. The folks in the best position of anyone to fundamentally improve global security.
13/
Answering that question is a big task. We tend to reject a lot of the traditional mechanisms other groups use to do those sorts of things -- standards bodies, licensing orgs, and so on. There are some good reasons for that, but we miss some benefits too.
14/
I do think that organization is a key element of the answer. I also believe that organization in this sense is, in significant part, an education problem. But, I admit that I believe most things break down to education problems. 😂
15/
Anyways, that's how I get from yesterday's Apple event to decision theory to fundamental infosec identity crisis problems. I hope y'all are having a great Tuesday so far 🤣
16/16
Since I spend so much time talking to and researching SOCs and SOC analysts, I often get asked, "What's the biggest difference between high and low growth SOCs?"
The answer? Expectations.
1/
First, what do I mean by growth? I'm talking about places where analysts can grow their abilities. These are the places that take complete novices and help them achieve competence or take experienced analysts and help them specialize. Growing human ability.
2/
The organizations that support growth well are those where leadership has high (but realistic) expectations for analysts. These most often center around expecting the analyst to be able to make reliable, evidence-driven decisions confidently.
3/
A lot of tips about good writing are rooted in the psychology of your reader. For example, if you want your reader to understand a risk (a probability), is it better to express that as a relative frequency (1 in 20) or a percentage (5%)?
1/
Typically, people understand risk better as a frequency. For example, consider the likelihood of a kid dropping out of high school. You could say that 5% of kids drop out, or that 1 in 20 does. Why is the latter more effective?
2/
First, it's something you can more easily visualize. There's some evidence you might be converting the percentage into the frequency representation in your head anyway. Weber et al. (2018) talked about this here: frontiersin.org/articles/10.33…
3/
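As a quick illustration of that conversion (my own sketch, not something from the thread), here's how you might turn a probability into the "1 in N" framing in Python:

```python
# Hypothetical helper (illustration only): express a probability
# as a "1 in N" frequency, which readers tend to grasp more easily
# than a raw percentage.
def as_frequency(probability: float) -> str:
    if not 0 < probability <= 1:
        raise ValueError("probability must be in (0, 1]")
    return f"1 in {round(1 / probability)}"

print(as_frequency(0.05))  # the 5% dropout example -> "1 in 20"
```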
Abstractions are something analysts have to deal with in lots of forms. Abstraction is the process of taking away characteristics of something to represent it more simply. So, what does that look like? 1/
Well, speaking broadly, let's say that I tell you I had scrambled eggs with parsley and tarragon for breakfast. You can probably picture that very clearly in your mind and it will be fairly accurate to reality. However... 2/
What if I just tell you I had eggs? Or only that I had breakfast? Your perception of reality may differ greatly from what I actually ate. The abstraction increases the opportunity for error.
One of my research areas that I write about often is curiosity and how it manifests in infosec education and practice. A topic that relates to curiosity is boredom, which I've done some recent reading on. I thought I'd share a bit about that. 1/
First, what is boredom? A consensus definition is that boredom is the uncomfortable feeling of wanting to engage in satisfying activity without being able to do so. 2/
When you're bored, two things happen: 1. You want to do something but don't want to do anything. 2. You are not mentally occupied in a way that leverages your capacities or skills.
Let's talk about some lessons gathered from how a student over the weekend quickly went from struggling on an investigation lab and...
"I'm stuck"
to finished and...
"I don’t know if you just Yoda’d the hell out of me or what"
1/x
This particular student emailed and said they were stuck and gave me some misc facts they had discovered. I responded and asked them to lay out a timeline of what they knew already so that we could work together to spot the gaps. 2/
The truth is that when this inquiry is taken seriously, it doesn't often result in us having to spot those gaps together at all because the student figures it out on their own. Why does this happen? Two main reasons... 3/
One of the things I absolutely love about our new @sigma_hq course is that one of the final challenges includes building your own new rule (we provide a bunch of ideas) with the option of actually submitting it to the public repo. Folks learn and contribute detection value back to the community.
@sigma_hq As part of that, @DefensiveDepth walks students through the process, even if they've never used git before. The Sigma community also does a great job of providing input and additional testing.
It's awesome to watch it all come together. I'm looking at a rule in the public repo now written by a student who didn't know anything about Sigma a month ago. It's been tested, vetted, and now it'll help folks find some evil.