Big news!

After a long wait, I'm excited to publicly release my doctoral dissertation, "The Analyst Mindset: A Cognitive Skills Assessment of Digital Forensic Analysts".

You can download it here: chrissanders.org/2021/12/disser….
In the accompanying blog post, I also talk a bit about how I came to this research area, why I think it's important, and a little bit of what's next. While my doctorate is a terminal degree, my dissertation is the beginning of more things to come. 2/
With that in mind, let me walk you through an overview of my research and findings here. This will be a long thread but it stays pretty high level, since the dissertation is nearly a 200-page document. 3/
Keep in mind that this is a research document and not a teaching document. While I cover a lot of ground for this type of research, the findings are still narrowly scoped. They tell a story, but not a complete story. That's what my classes are for. 4/
In my literature review I describe the problem I'm trying to solve and why there is a gap in knowledge here. 5/
Cyber security is still young and very complex. The industry is in a state of cognitive crisis characterized, in part, by a lack of understanding of skilled analyst performance and an overreliance on tacit knowledge. 6/
This cognitive crisis results in a skills shortage. Not a people shortage, a shortage of people with the right skills. We can see this in many places, but a common one is the lack of available entry-level jobs. 7/
I am mostly concerned with the investigative skillset -- the domain of the analysts who identify and understand attacks. I identify 3 roles that are primarily investigation-focused and 3 roles that leverage some investigative skillsets in support of the broader investigation. 8/
Industry and academia stink at teaching people to be analysts. This is in part because the people who are good at investigative work are not great at explaining why they are good at it or how they do their jobs. I don't blame them... that's a hard thing to do. 9/
There is also no meaningful, detailed accounting of the cognitive skills that analysts rely on. Some related govt-led efforts have been made (DoL CCM, NSA-CAE, NICE), but they aren't detailed enough or don't acknowledge the investigative skillset as a unique domain. 10/
Universities struggle here for many reasons. A lot of them treat security as a child of computer science and rely on those faculty to teach it. At best, security as a practice is a cousin of CS. It is a unique domain that warrants unique approaches. 11/
Universities also struggle to identify WHAT to teach. They may rely on industry surveys or the govt-led efforts whose issues I've already highlighted in this thread. Some just assimilate strategies from other colleges (made up of the first two things). 12/
And, of course, a lot of them don't have a program-wide strategy at all. They hire faculty and let them wing it based on their experience. That goes back to the first problem of analysts not being able to identify or explain those processes. 13/
At the industry level, very little work has been done to identify analyst cognitive skills, and what exists usually lacks academic rigor, is heavily biased, or is kept private by the companies that produced it. 14/
So, this is where we are. We need to be able to teach people how to be analysts but we don't know what to teach them. That is why my research was and is necessary. Let's talk about how I designed it. 15/
My research questions were:
1. What procedural skills do experienced digital forensic analysts use during investigations?
2. What decision-making skills do experienced digital forensic analysts use during investigations?
16/
I chose to focus on procedural and decision-making skills because they are somewhat fundamental. I needed to identify these before I could identify other, more nuanced skills. Combined, they can also help produce a unified model: decisions lead to procedures, which lead to more decisions, etc. 17/
Cognitive Task Analysis is a collection of techniques that allow researchers to uncover cognitive skills (things in your head) from practitioners. I used two techniques: PARI and CDM. 18/
PARI stands for Precursor, Action, Results, and Interpretation and it's designed to uncover procedural skills. For this, I designed a tabletop investigation scenario that participants worked through verbally with me. 19/
CDM stands for Critical Decision Method and it's designed to uncover facets of decision-making skills. For this, subjects described novel investigations they had worked, and I identified decision points and asked probing questions related to them. 20/
This diagram provides an overview of my research method. I conducted the PARI and CDM collection and analysis separately, analyzing results within and across cases. Then I synthesized the results to build a model of skilled analyst performance. 21/
My experience as a practitioner is what enabled this research method. Someone without that experience would not have been able to design a realistic tabletop exercise or speak at the level of detail required to understand the nuance of the analysts' processes. 22/
Once I finished the analysis, I provided the results back to the analyst subjects for review and feedback. After all, they are experts. Member checking is super useful for qualitative research in situations like this and helps establish reliability. 23/
In total, I had 9 subjects:
3 triage/SOC analysts
3 incident responders
3 forensic examiners

I used this spread because I wanted to uncover investigative skills universal to these roles. This is the knowledge domain I'm establishing. 24/
On average, participants spent 86% of their day performing investigations. The average years of IT and security experience were 10.8 and 13.2, respectively. The average age was 38.

Now for the part folks care about... results! 25/
Starting with procedural skills, I identified 308 investigative actions, 210 precursors, and 95 interpretations. From my decomposition of that data, I uncovered 4 unique skill domains: inquiry, evidentiary, anomaly detection, and network mapping / attack visualization skills. 26/
Inquiry skills were all about analysts forming investigative questions based on evidence to decide where they want to go next. Questions manifested the results of analysts' sensemaking of evidence and pointed them toward additional evidence. 27/
Said another way, analysts interpreted evidence, forecasted potential events, established a hypothesis, and decided which evidence could prove/disprove that hypothesis. The investigative question was the manifestation of that work. 28/
Investigative questions came in a few varieties. The most common were event-relative questions that were related to specific timeline events. They were usually preceding (leading to an event), succeeding (following an event), or about the context of the event itself. 29/
Analysts also asked proximate questions that combined preceding and succeeding questions (like, +/- 5 minutes from an event). This is how analysts typically correlate events between data sources. 30/
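To make the proximate question concrete, here's a minimal Python sketch of a +/- window search. The event structure, field names, and values are all hypothetical illustrations, not from the dissertation:

```python
from datetime import datetime, timedelta

def proximate_events(events, anchor_time, window_minutes=5):
    """Return events within +/- window_minutes of an anchor timestamp.

    Sketch of a proximate question: 'what else happened around this
    event?' Event dicts and field names here are assumptions.
    """
    window = timedelta(minutes=window_minutes)
    return [e for e in events if abs(e["timestamp"] - anchor_time) <= window]

# Hypothetical usage: correlate proxy log entries around a suspicious
# process launch observed in host telemetry.
proxy_logs = [
    {"timestamp": datetime(2021, 12, 1, 14, 2), "url": "evil.example.com"},
    {"timestamp": datetime(2021, 12, 1, 15, 30), "url": "ok.example.com"},
]
launch_time = datetime(2021, 12, 1, 14, 0)
print(proximate_events(proxy_logs, launch_time))  # only the 14:02 entry
```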
Another question type was the capability matching question, where analysts sought to confirm or refute the presence of a known threat capability. This meant trying to determine if specific malware, software, or threat actors were present on a system/network. 31/
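In practice, a capability matching question often reduces to an indicator lookup. A hedged sketch, with made-up indicator values and artifact fields:

```python
# Hypothetical IOC set for a known capability; all values are placeholders.
KNOWN_CAPABILITY = {
    "hashes": {"9f2c44ab01..."},      # file hashes tied to the capability
    "filenames": {"mimikatz.exe"},    # tool names tied to the capability
}

def capability_present(host_artifacts):
    """Confirm/refute presence by intersecting host artifacts with IOCs."""
    return bool(
        host_artifacts.get("hashes", set()) & KNOWN_CAPABILITY["hashes"]
        or host_artifacts.get("filenames", set()) & KNOWN_CAPABILITY["filenames"]
    )

# Usage with hypothetical artifacts pulled from a disk image:
artifacts = {"hashes": {"ab12..."}, "filenames": {"mimikatz.exe"}}
print(capability_present(artifacts))  # True: filename indicator matched
```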
Last were utility questions, whose purpose was to gather data needed to answer other questions. These demonstrate how analysts often leverage multiple evidence sources toward a single analysis goal. 32/
Along with these question types, I identified clusters of questions with related goals. I called these Directed Analysis Techniques (DATs). Based on the scenarios I provided, I identified 9 DATs. 33/
The DATs I identified were prevalence, directed execution, undirected execution, reputation, malware capability, lateral movement, staging and exfil, phishing, and account role analysis. These won't be exhaustive, but I now have a framework for identifying more. 34/
Next were evidentiary skills. These were skills that analysts leveraged to interpret meaning from evidence to form and answer questions. Dimensions included interpretation, capability comprehension, collection, and manipulation. 35/
After that were Anomaly Detection skills that focused on analysts' ability to perform pattern matching of many sorts to assess the disposition of events. I observed the use of many discrete techniques. 36/
Anomaly detection techniques included baseline comparisons, frequency assessment, excluding known benign, timing, and syntax. Again, not exhaustive (based on my scenario), but I have a framework for identifying more. 37/
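Frequency assessment in particular lends itself to a small worked example. Here's a sketch of "stacking" values across a fleet, with known-benign entries excluded first; the data and thresholds are hypothetical:

```python
from collections import Counter

def rare_values(observations, known_benign=(), max_count=2):
    """Flag values seen rarely across hosts after excluding known-benign.

    A sketch of frequency assessment ('stacking'); data is hypothetical.
    """
    benign = set(known_benign)
    counts = Counter(o for o in observations if o not in benign)
    return [value for value, n in counts.items() if n <= max_count]

# Hypothetical fleet-wide process name observations:
processes = ["svchost.exe"] * 500 + ["chrome.exe"] * 200 + ["xc0py.exe"]
print(rare_values(processes, known_benign=["chrome.exe"]))  # ['xc0py.exe']
```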
Finally, there was some evidence of analysts using cognitive visualization skills to map networks and attackers' movement through them. This most closely resembled a graph database with nodes and edges, but occurred at a few different levels of abstraction. 38/
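That nodes-and-edges model is easy to picture as code. A sketch using networkx (my choice for illustration; the hosts, accounts, and actions below are invented):

```python
import networkx as nx

# Hosts as nodes, observed attacker actions as directed edges.
attack_map = nx.MultiDiGraph()
attack_map.add_edge("workstation-12", "fileserver-01",
                    action="SMB logon", account="jsmith")
attack_map.add_edge("fileserver-01", "dc-01",
                    action="RDP", account="svc_backup")

# Walking outbound edges reconstructs movement from a foothold.
for src, dst, attrs in attack_map.out_edges("workstation-12", data=True):
    print(f"{src} -> {dst}: {attrs['action']} as {attrs['account']}")
```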
Now, on to Decision Making skills! I identified 100 decision points across my subjects' scenarios. I focused on decision cues (things that led to decisions) and goals (what they accomplished). 39/
I uncovered four cue types. First were relational cues that indicated the presence of other, yet-to-be-discovered relationships relevant to the attack timeline. 40/
Next were dispositional cues that indicated whether an event or relationship was suspicious or benign.

Third were novelty cues that indicated the presence of some unknown threat, capability, or technology that the analyst did not understand well. 41/
Fourth were operational cues that came from an external input and indicated a potential to affect the analyst's ability to conduct the investigation. 42/
All these cues were things analysts recognized that compelled them to make a decision (usually about what to do next). Part of the analysts' skill was their ability to recognize these cues for what they were. It's notable here that the same data can serve as multiple cue types. 43/
Now for decision goals -- I found six goal types: identify relationships, assess threat capability, assess forensic capability, perform bulk collection, take response action, and seek advice or help. 44/
As you might imagine, certain types of cues often led to specific decision goals. For example, relational cues typically led to decisions whose goal was to identify a relationship. Similarly, novelty cues often led to decisions whose goal was to assess a threat capability. 45/
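As a rough illustration of that tendency, you could encode the pairings as a lookup. This sketch includes only the two pairings mentioned above, not the full mapping from the dissertation:

```python
# Only the cue-to-goal pairings stated in this thread are encoded here.
CUE_TO_TYPICAL_GOAL = {
    "relational": "identify relationships",
    "novelty": "assess threat capability",
}

def likely_goal(cue_type):
    # Other cue types (dispositional, operational) depend on context.
    return CUE_TO_TYPICAL_GOAL.get(cue_type, "context-dependent")

print(likely_goal("novelty"))  # assess threat capability
```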
Now for the finale... 46/
By identifying procedural and decision-making skills I was able to create a model that explained the workflow of expert analysts. The model is called Diagnostic Inquiry. 47/
Diagnostic because the goal is to find out what happened and assess a disposition of those events. Inquiry because forming investigative questions is the mechanism that moves the process forward. 48/
Based on my findings in the dissertation research, this is a much more detailed version of the initial model of diagnostic inquiry that I have written about before. 49/
Using this model, I can walk through an analyst's investigation process and effectively describe the skills they leverage along the way. 50/
It's notable that when I teach the model, I use a simplified version that looks different from this one. That's part of the point of this research: establishing a foundation from which to create mental models useful for teaching and learning. 51/
That's pretty much it from a high level (he says 50 tweets later). If any of this interests you, I encourage you to read the paper. My blog post contains a guide to relevant sections based on your role (analyst, educator, researcher). chrissanders.org/2021/12/disser… 52/
And finally, my research is self-funded. If you find value in any of it (and want to see more), consider registering for one of my online classes where this work manifests (networkdefense.co/courses/) or donating to @RuralTechFund. 53/53