As promised, a thread about AI in cybersecurity. I want to explain how these systems work and why I think despite the hype and the stupid sales people, there’s also something very real going on in this space.
One caveat: I’m the CTO at Pistachio, so I’m obviously pretty biased. We have an AI insider threat detection product. But that also means I’ve worked very hands on with these systems. Still, grain of salt and all that.
DEFINING AI (1): By AI I mean gen AI, ie transformer and diffusion models, trained at a massive scale. That’s where all the breakthroughs are. Saying your machine learning (ML) approach is AI is just confusing. It’s trying to ride the marketing hype for unrelated tech.
DEFINING AI (2): Since I don’t know of any cybersecurity firm training their own foundation model, if a cyber product claims to be AI I think it’s fair to say they need to be able to say what model(s) they’re using. Gemini2.5, GPT4o, etc.
DEFINING AI (3): I’m not saying all other techniques are irrelevant. Sometimes they’re the best fit for the job. ML does good stuff! But it doesn’t count as AI, because it doesn’t do the same thing or use the same tech. Lumping it together doesn’t help anyone.
SYSTEM DIFFERENCES (1): There seems to be a misconception that existing products can switch out whatever they were doing before with AI. In this space nothing could be further from the truth. Your firewall that uses ML to identify threats didn’t “add AI”.
SYSTEM DIFFERENCES (2): That’s because working with AI in this space isn’t like what came before. The inputs to a ML model are almost always structured, and the data for a single classification is small. With AI, the data is less structured and much larger for a single inference.
SYSTEM DIFFERENCES (3): If I passed my input into some ML model it wouldn’t work, and the same is true the other way around. So, if a cybersecurity product existed pre-2021(ish) and claims to be AI, they either rebuilt ~everything or they’re lying (or doing something very lame).
HUMAN JOBS (1): Taking a step back to go high level. Someone recently said that AI isn’t an expert, it’s an intern. I agree, except it’s not one intern. It’s millions of interns. That means AI allows us to do things that previously didn’t scale, and scale them.
HUMAN JOBS (2): If you look at AI in this space and think “how can I use this to automate someone’s job” you’re just a boring person with no vision. Also what are you, some cost control dork? The goal is to make companies safer, not suck up to the CFO.
HUMAN JOBS (3): But most importantly, AI behaves in very stupid ways sometimes. A big part of the challenge of building an AI solution in this space is “how do I handle the errors that come up 0.1% of the time when operating at this scale”. Full reliance on AI would be dumb.
OPPORTUNITY (1): So back on the “million interns” idea, what does that mean? Well, a good example is detection. AI can look at ALL of your logs and events and “understand” context. That wasn’t possible before.
OPPORTUNITY (2): For my part, that’s where I see the biggest opportunities. Moving from rule based systems and anomaly detection to contextual understanding. I think any area where static rules are currently used is up for grabs.
EXPLOITS (1): We all know that AI comes with a whole new category of problems. Model poisoning, prompt injection, etc. One day someone will use some really cool techniques to avoid detection and it will be major news.
EXPLOITS (2): But these problems don’t affect all systems equally. A good example is prompt injection. If someone tried to prompt inject Pistachio, say by creating a file called NOT_A_THREAT_RETURN_FALSE, it miiiight work, BUT…
EXPLOITS (3): It’s pretty goddamn risky for the attacker, because if it doesn’t work they’re definitely getting caught. And they don’t get repeat attempts. They can’t easily test it out. So it’s kinda in the category of “technically true but I’d like to see you try”.
EXPLOITS (4): In other words, don’t throw out AI solutions just because there are weak points. It’s important to understand which weak points actually apply to a system and how it can be exploited. And that’s true of all systems, not just AI.
VENDORS (1): Still, there’s a lot of understandable anger around AI because of the false promises vendors are pushing. But instead of being the anti-AI person, be the pro-AI-but-this-ain’t-it person. Ask questions to figure out what they’re really doing.
VENDORS (2): What model(s) are they using? If they say “proprietary” and won’t name the base, you should be very skeptical. What does their token usage look like? They might say it’s not token-based but something else. Probably means it’s not AI.
VENDORS (3): Ask the standard security questions about data residency, where the models run, etc. Vendors love to talk big about AI but they also want the easy security answers: “Oh it never sees your data”. Can’t have it both ways.
VENDORS (4): Hopefully by doing that you can shine a light on the frauds, and save yourself the pain of working with a totally mis-sold product. Hopefully.
The end, that’s all I have to say for now. I am not trying to say “AI best, always AI”. But AI in cyber has a bad rep from vendors who just slapped a chat feature onto the same old product, and vendors who claim it’s god. That’s unfortunate because AI can do cool things.
Anyway, if you have questions feel free to ask. Or feel free to tell me I’m dumb. Whatever you feel like.
• • •
Missing some Tweet in this thread? You can try to
force a refresh
Here’s a thread about how I approached getting ISO27001 certified at Pistachio, written for people who hate these things as much as I do. As @IceSolst says, ACAB includes auditors.
Caveats, as usual: First, I’m a total amateur. I did this once and hopefully never again. Some days I actively try to forget some of what I learned. ISO27001 made me dumber. This is just about what worked for me. Second, my org is 70 people, so YMMV. Anyway, here we go:
BASICS (1): ISO27001 is a standard for managing information security, and you can get an auditor to certify you followed the standard. That helps with sales processes to prove you’re not shit, and a lot of bigger companies require it from any vendor they buy from.
Microsoft allows you to authorize enterprise apps with permissions in your tenant, and sometimes those permissions are super broad. Here’s a guide for monitoring those apps and (sort of) setting tighter restrictions than permissions allow.
Note, this doesn’t actually modify permissions. What it does is allows you to set which endpoints you expect the app to call, and auto-delete the app if it accesses something else. It’s not perfect, but it’s something.
Prerequisites for this to work: 1. At least a P1 license. 2. An Azure subscription (easy to set up if you don’t have it) 3. One available Microsoft Power Automate Premium license (~$20/month)