The most common action an analyst will take is performing a search, usually in a tool like Security Onion, Splunk, Kibana, and so on. The second most common action is pivoting. That term gets used a lot, but what exactly does it mean? 1/
In the investigative context, analysts pivot when they perform a search in one evidence source, select a value from that search, and use it to perform another search in a different evidence source. 2/
For example... 1. An analyst searches in flow data to see who communicated with a suspicious IP. 2. They get a result and identify a Src IP. 3. They search in PCAP data for the Src IP / Dst IP pair to examine the communication. 3/
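To make that concrete, here's a minimal sketch of that flow-to-PCAP pivot in Python. It assumes Zeek conn.log in JSON format (id.orig_h and id.resp_h are real Zeek field names), but the file paths and the suspicious IP are placeholders.

```python
import json

SUSPICIOUS_IP = "203.0.113.50"  # placeholder address (TEST-NET-3 range)

# Step 1: search the flow data for hosts that talked to the suspicious IP.
peers = set()
with open("conn.log") as f:  # hypothetical path to a Zeek conn.log (JSON lines)
    for line in f:
        rec = json.loads(line)
        if rec.get("id.resp_h") == SUSPICIOUS_IP:
            peers.add(rec["id.orig_h"])

# Step 2: pivot -- turn each Src/Dst pair into a PCAP search.
for src in sorted(peers):
    print(f"tcpdump -r capture.pcap 'host {src} and host {SUSPICIOUS_IP}'")
```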
Another example... 1. An analyst searches in proxy data to identify visits to uncategorized domains. 2. A result shows a download from an odd looking domain and includes the username of the source. 3. The analyst searches for executions in Windows event logs for this username. 4/
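A sketch of that second pivot, assuming the Windows Security events have been exported as JSON lines; the file path and exact field names vary with your export pipeline, so treat them as assumptions.

```python
import json

username = "jdoe"  # hypothetical username pulled from the proxy log result

with open("security_events.json") as f:  # hypothetical JSON-lines export
    for line in f:
        event = json.loads(line)
        # Event ID 4688 = "A new process has been created"
        if event.get("EventID") == 4688 and event.get("SubjectUserName") == username:
            print(event.get("NewProcessName"), event.get("CommandLine"))
```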
One more... 1. An analyst searches in Sysmon logs to find executed files on a host. 2. A result shows an unknown file name and includes a hash. 3. The analyst searches for the file hash in a public sandbox to try to ascertain the file's behavior. 5/
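As a sketch, that hash pivot could be scripted against the VirusTotal v3 file-lookup endpoint (one of several public options); the hash and API key below are placeholders.

```python
import requests

file_hash = "d41d8cd98f00b204e9800998ecf8427e"  # placeholder hash
resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{file_hash}",
    headers={"x-apikey": "YOUR_API_KEY"},  # placeholder key
)
if resp.ok:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"{file_hash}: {stats.get('malicious', 0)} engines flagged this file")
```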
Pivots are how analysts connect evidence sources to answer their investigative questions. Most analysts start to learn this skill by taking simple indicators and searching for them in OSINT reputation sites or Google. 6/
As analysts progress in skill, they start to get better at within-realm pivoting. Things like connecting flow to PCAP, an app log to an OS log, one threat intel source to another, and so on. 7/
When I say realm, I'm referring to how I classify different evidence sources into 1 of 6 realms (not going to detail those here). 8/
Over time, analysts get better at cross-realm pivoting. That might include things like going from a proxy log to a Windows event log, going from the registry to a file, or going from an event log to memory. 9/
Cross-realm pivots take a bit longer to learn because they require understanding a greater diversity of evidence sources. Tools also don't always support these pivots as intuitively. 10/
Speaking of tools, a defining characteristic of good ones is that they help analysts pivot more easily. For example, here's a screenshot from Security Onion where you can pivot to a few different things. 11/
Some tools have done this for a long time, particularly for within-realm network pivoting. Here's a screenshot from Sguil where you've got these options. 12/
In some rare cases, the tools that generate the data also provide added fields that make pivoting easier. An example I often refer to is the Zeek UID field. It allows you to pivot between all the log files Zeek generates, which is incredibly useful. 13/
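Here's a minimal sketch of what that UID pivot looks like in practice, assuming Zeek logs in JSON format; the uid field is real Zeek behavior, while the UID value and file paths are placeholders.

```python
import json

TARGET_UID = "CmES5u32sYpV7JYN"  # hypothetical Zeek connection UID

# Walk several Zeek logs and pull every record tied to the same connection.
for log in ("conn.log", "dns.log", "http.log", "ssl.log"):
    try:
        with open(log) as f:
            for line in f:
                rec = json.loads(line)
                if rec.get("uid") == TARGET_UID:
                    print(log, "->", rec)
    except FileNotFoundError:
        continue  # not every log exists in every capture
```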
Ideally, tools would let folks define their own pivots based on the data sources they have mapped. Put those in a context menu that changes based on the field you click on, and you're in business. 14/
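A rough sketch of what user-defined pivots might look like under the hood: a mapping from field names to search templates that a context menu could be rendered from. All the field names and templates here are hypothetical.

```python
# Hypothetical pivot definitions: clicked field -> (label, search template).
PIVOTS = {
    "src_ip": [
        ("Flow records", "index=flow src_ip={value}"),
        ("Full PCAP", "tcpdump -r capture.pcap 'host {value}'"),
    ],
    "username": [
        ("Process creations", "index=wineventlog EventID=4688 SubjectUserName={value}"),
    ],
    "hash": [
        ("Sandbox lookup", "https://www.virustotal.com/gui/file/{value}"),
    ],
}

def pivot_options(field, value):
    """Return (label, ready-to-run search) pairs for a clicked field."""
    return [(label, tmpl.format(value=value)) for label, tmpl in PIVOTS.get(field, [])]

print(pivot_options("src_ip", "198.51.100.7"))
```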
Some pivots are a lot more common than others, and some manifest more commonly in specific types of investigations. These provide opportunities to use automation and SOAR-like things to give analysts easier access to those pivots (à la guided investigations). 15/
The analyst role is cognitively defined by the formation of investigative questions and interpretation of evidence. But from an observational viewpoint, these pivots are how you see those things manifest. 16/
To get good at pivoting, you have to understand the capabilities of your evidence (what questions it can answer), how to interpret it (what relationships it represents), how to collect it, and how to manipulate it. 17/
The more evidence sources whose capabilities and interpretation you understand, the more effectively you'll be able to pivot among them. 18/
It's important to distinguish pivoting from the analyst perspective from that of the attacker. The former is about interpreting data from multiple sources, while the latter is about leveraging access on one host to gain access to another. 19/
That's pivoting. Perform a search in one evidence source, select a value from that search, and use it to perform another search in a different evidence source.
It's a simple concept that most analysts leverage constantly but isn't always defined so clearly. 20/20
The gist of the findings is that folks are more likely to change their mind on a topic when they're asked to make a prediction about facts relevant to the topic and subsequently find out that their prediction was false.
Further, the magnitude of the prediction error matters: "we found that prediction error size linearly predicts rational belief update and that making large prediction errors leads to larger belief updates than being passively exposed to evidence"
As one of my last doctoral coursework presentations, I spent time talking to my colleagues about the ethical dilemmas surrounding offensive security tool release. The outsider input was fascinating. Here's a thread to share some of that... 1/
Now keep in mind, my colleagues here are primarily educators. K-12 and university teachers, administrators, educational researchers, and so on. A few industry-specific education people as well, but none from infosec like me. 2/
My goal was to present the issue, explain why it was an ethical dilemma, and collectively discuss ethical perspectives that could influence decision-making. I withheld any of my opinions to let them form their own but gave lots of examples of OSTs and their use. 3/
Although I had met Alan, I didn't know him well. However, his signature hangs on my wall as part of the SANS Difference Makers Award he had a hand in presenting to me in 2018. 1/
From what I know of him, he was a big part of making sure this award existed because he believed that we should use technology to make people's lives better, and a lot of his life was committed to that idea. I think that's a sentiment most of us can get behind. 2/
When we think of people whose contributions shaped the state of education in computer security, Alan is likely in the top portion of that list. When you consider the transformative power of education on people's lives, it's easy to see how many people he impacted in some way. 3/
It doesn't matter if you don't have a lot of teaching experience as long as you are well-spoken. I'll work with you and teach you principles of curriculum design and adult learning to help turn your expertise into effective learning content.
Here are some comments from a few of our course authors who I've worked with during this process so far.
I think one of the best 1-2 punches we've got going right now is our CyberChef course + our RegEx course. I consider both pretty necessary skills for analysts of multiple sorts (SOC, IR, and Malware RE).
CyberChef is maybe my most used tool in investigations these days other than the SIEM or a terminal. That course gives you a taste of regex, but the RegEx course makes you comfortable there. You also get a free copy of RegexBuddy with that course.
You also get the strong 1-2 punch of Matt's Australian accent and Darrel's British accent 😍
Some of the work I'm most proud of from my time at Mandiant was pioneering the building of investigative actions *into detection signatures* as they were written. This had a profound impact across the detection and product teams and made the tool so much more accessible.
We included two main components: a plain English version of the question that the analyst needed to answer, and the search syntax that would provide data to answer the question. It manifested in the UI as something I named "Guided Investigations".
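To illustrate the idea (this is sketched only from the description above, not the actual Mandiant format), a signature carrying its own investigative actions might look something like this; every field name here is hypothetical.

```python
# Hypothetical shape of a detection signature with embedded
# investigative actions; illustrative only.
SIGNATURE = {
    "name": "Suspicious scheduled task creation",
    "logic": "event_id:4698 AND NOT subject_user:SYSTEM",
    "investigative_actions": [
        {
            "question": "What command does the newly created task run?",
            "search": "index=wineventlog EventID=4698 TaskName={task_name}",
        },
        {
            "question": "Has this user created other scheduled tasks recently?",
            "search": "index=wineventlog EventID=4698 SubjectUserName={username}",
        },
    ],
}
```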
This helped detection engineers write much better signatures because they had to think more deliberately about the consumer of the alerts (the analyst). It led to higher quality detection logic and clearer metadata, including rule descriptions.